Dec 09 12:04:59 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 12:04:59 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:04:59 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:05:00 crc restorecon[4696]: 
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 
12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:05:00 crc 
restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 
12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:05:00 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 09 12:05:00 crc kubenswrapper[4703]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.902984 4703 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905577 4703 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905595 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905599 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905604 4703 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905616 4703 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905622 4703 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905627 4703 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905632 4703 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905636 4703 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905641 4703 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905645 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905650 4703 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905654 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905659 4703 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905663 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905666 4703 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905670 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905673 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905677 4703 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905680 4703 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905684 4703 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905687 4703 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905691 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905694 4703 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905698 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905701 4703 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905704 4703 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905708 4703 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905712 4703 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905716 4703 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905719 4703 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905723 4703 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905726 4703 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905731 4703 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
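The "unrecognized feature gate" warnings name OpenShift-level gates (GatewayAPI, InsightsConfigAPI, and so on) that this kubelet build's own gate registry does not know; the log shows the whole list re-emitted on each pass over the configuration, so the same names recur throughout this section. A small sketch that collapses a capture into per-gate counts, which makes the repetition easy to quantify; the message shape is taken from the lines above.

import re
from collections import Counter

# Matches the warning body logged by feature_gate.go:330 above.
GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

def gate_warning_counts(journal_text: str) -> Counter:
    """Count how often each unknown feature gate is warned about."""
    return Counter(GATE_RE.findall(journal_text))

text = ("W1209 12:05:00.905595 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed "
        "W1209 12:05:00.907002 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed")
print(gate_warning_counts(text))  # Counter({'HardwareSpeed': 2})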
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905735 4703 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905739 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905744 4703 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905747 4703 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905751 4703 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905755 4703 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905759 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905764 4703 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905768 4703 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905772 4703 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905775 4703 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905779 4703 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905783 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905787 4703 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905790 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905794 4703 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905797 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905801 4703 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905804 4703 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905807 4703 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905811 4703 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905814 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905819 4703 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905823 4703 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905827 4703 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905830 
4703 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905835 4703 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905839 4703 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905843 4703 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905847 4703 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905850 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905855 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905858 4703 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905862 4703 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905865 4703 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905869 4703 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.905874 4703 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906109 4703 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906121 4703 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906127 4703 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906132 4703 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906138 4703 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906142 4703 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906147 4703 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906152 4703 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906157 4703 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906161 4703 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906166 4703 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906170 4703 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906174 4703 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906178 4703 flags.go:64] FLAG: --cgroup-root="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906182 4703 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906202 4703 flags.go:64] FLAG: --client-ca-file="" Dec 09 
12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906206 4703 flags.go:64] FLAG: --cloud-config="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906210 4703 flags.go:64] FLAG: --cloud-provider="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906214 4703 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906220 4703 flags.go:64] FLAG: --cluster-domain="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906224 4703 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906228 4703 flags.go:64] FLAG: --config-dir="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906232 4703 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906237 4703 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906242 4703 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906246 4703 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906250 4703 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906255 4703 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906259 4703 flags.go:64] FLAG: --contention-profiling="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906263 4703 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906267 4703 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906272 4703 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906276 4703 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906281 4703 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906285 4703 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906289 4703 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906293 4703 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906297 4703 flags.go:64] FLAG: --enable-server="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906302 4703 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906307 4703 flags.go:64] FLAG: --event-burst="100" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906311 4703 flags.go:64] FLAG: --event-qps="50" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906315 4703 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906319 4703 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906323 4703 flags.go:64] FLAG: --eviction-hard="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906328 4703 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906332 4703 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 12:05:00 crc 
kubenswrapper[4703]: I1209 12:05:00.906336 4703 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906340 4703 flags.go:64] FLAG: --eviction-soft="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906344 4703 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906348 4703 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906352 4703 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906356 4703 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906360 4703 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906364 4703 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906367 4703 flags.go:64] FLAG: --feature-gates="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906373 4703 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906377 4703 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906381 4703 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906385 4703 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906389 4703 flags.go:64] FLAG: --healthz-port="10248" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906393 4703 flags.go:64] FLAG: --help="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906397 4703 flags.go:64] FLAG: --hostname-override="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906401 4703 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906406 4703 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906410 4703 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906414 4703 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906418 4703 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906421 4703 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906426 4703 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906429 4703 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906434 4703 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906438 4703 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906442 4703 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906446 4703 flags.go:64] FLAG: --kube-reserved="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906450 4703 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906454 4703 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 12:05:00 crc 
kubenswrapper[4703]: I1209 12:05:00.906458 4703 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906462 4703 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906467 4703 flags.go:64] FLAG: --lock-file="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906471 4703 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906475 4703 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906479 4703 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906485 4703 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906489 4703 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906493 4703 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906497 4703 flags.go:64] FLAG: --logging-format="text" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906501 4703 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906505 4703 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906509 4703 flags.go:64] FLAG: --manifest-url="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906513 4703 flags.go:64] FLAG: --manifest-url-header="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906518 4703 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906523 4703 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906528 4703 flags.go:64] FLAG: --max-pods="110" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906532 4703 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906540 4703 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906544 4703 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906549 4703 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906553 4703 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906558 4703 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906562 4703 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906573 4703 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906578 4703 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906584 4703 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906588 4703 flags.go:64] FLAG: --pod-cidr="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906592 4703 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906601 4703 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906605 4703 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906609 4703 flags.go:64] FLAG: --pods-per-core="0" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906613 4703 flags.go:64] FLAG: --port="10250" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906617 4703 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906621 4703 flags.go:64] FLAG: --provider-id="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906626 4703 flags.go:64] FLAG: --qos-reserved="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906630 4703 flags.go:64] FLAG: --read-only-port="10255" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906634 4703 flags.go:64] FLAG: --register-node="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906638 4703 flags.go:64] FLAG: --register-schedulable="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906642 4703 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906649 4703 flags.go:64] FLAG: --registry-burst="10" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906653 4703 flags.go:64] FLAG: --registry-qps="5" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906657 4703 flags.go:64] FLAG: --reserved-cpus="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906661 4703 flags.go:64] FLAG: --reserved-memory="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906666 4703 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906670 4703 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906674 4703 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906678 4703 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906682 4703 flags.go:64] FLAG: --runonce="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906686 4703 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906690 4703 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906695 4703 flags.go:64] FLAG: --seccomp-default="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906699 4703 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906703 4703 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906707 4703 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906711 4703 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906715 4703 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906719 4703 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 
12:05:00.906723 4703 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906727 4703 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906731 4703 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906735 4703 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906739 4703 flags.go:64] FLAG: --system-cgroups="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906743 4703 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906749 4703 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906753 4703 flags.go:64] FLAG: --tls-cert-file="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906757 4703 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906766 4703 flags.go:64] FLAG: --tls-min-version="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906770 4703 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906774 4703 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906778 4703 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906782 4703 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906786 4703 flags.go:64] FLAG: --v="2" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906791 4703 flags.go:64] FLAG: --version="false" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906796 4703 flags.go:64] FLAG: --vmodule="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906801 4703 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.906805 4703 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906901 4703 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906905 4703 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906909 4703 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906912 4703 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906916 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906920 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906923 4703 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906927 4703 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906931 4703 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906935 4703 feature_gate.go:330] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906939 4703 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906944 4703 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906947 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906951 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906954 4703 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906959 4703 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906964 4703 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906968 4703 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906972 4703 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906976 4703 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906979 4703 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906983 4703 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906990 4703 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906994 4703 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.906999 4703 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907002 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907006 4703 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907010 4703 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907015 4703 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
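At verbosity --v="2" the kubelet echoes every command-line flag through flags.go:64, as in the FLAG: entries above, which makes the effective invocation recoverable from a capture. A sketch under the assumption (true for every entry shown) that values are always double-quoted and contain no embedded quotes:

import re

# Matches echo entries like: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flag_dump(journal_text: str) -> dict:
    """Collect the kubelet's flag echo into a {flag: value} mapping."""
    return dict(FLAG_RE.findall(journal_text))

dump = ('I1209 12:05:00.906562 4703 flags.go:64] FLAG: --node-ip="192.168.126.11" '
        'I1209 12:05:00.906224 4703 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"')
flags = parse_flag_dump(dump)
print(flags["--node-ip"], flags["--config"])  # 192.168.126.11 /etc/kubernetes/kubelet.conf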
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907019 4703 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907023 4703 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907027 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907030 4703 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907034 4703 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907038 4703 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907042 4703 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907045 4703 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907048 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907052 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907055 4703 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907111 4703 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907115 4703 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907120 4703 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907124 4703 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907128 4703 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907132 4703 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907137 4703 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907141 4703 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907145 4703 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907149 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907153 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907157 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907161 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907166 4703 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907174 4703 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907178 4703 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907198 4703 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907203 4703 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907207 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907211 4703 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907215 4703 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907219 4703 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907223 4703 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907228 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907232 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907237 4703 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907242 4703 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907246 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907250 4703 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907254 4703 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.907259 4703 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.907405 4703 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.913927 4703 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.913949 4703 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914128 4703 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914138 4703 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914144 4703 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914149 4703 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914154 4703 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914159 4703 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914166 4703 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914177 4703 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914182 4703 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914201 4703 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914206 4703 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914211 4703 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914215 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914219 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914224 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914228 4703 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914232 4703 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914236 4703 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914241 4703 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914245 4703 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914254 4703 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914259 4703 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914265 4703 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914270 4703 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914274 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914280 4703 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914285 4703 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914289 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914294 4703 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914298 4703 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914303 4703 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914308 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914316 4703 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914322 4703 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914326 4703 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914330 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914335 4703 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914342 4703 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914347 4703 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914351 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914356 4703 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914360 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914365 4703 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914369 4703 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914375 4703 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914385 4703 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914389 4703 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914394 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914399 4703 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914404 4703 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914411 4703 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
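Every kubenswrapper entry in this section carries the klog header Lmmdd hh:mm:ss.micros PID file:line], where the leading letter is the severity (I info, W warning). A sketch that splits the header, assuming exactly the shape seen in the entries above:

import re

# klog header as it appears above, e.g.
#   W1209 12:05:00.914385 4703 feature_gate.go:330] message...
KLOG_RE = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)"
)

line = "W1209 12:05:00.914385 4703 feature_gate.go:330] unrecognized feature gate: SignatureStores"
m = KLOG_RE.search(line)
print(m.group("sev"), m.group("src"), "->", m.group("msg"))
# W feature_gate.go:330 -> unrecognized feature gate: SignatureStores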
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914417 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914423 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914429 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914434 4703 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914439 4703 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914443 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914451 4703 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914455 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914459 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914464 4703 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914469 4703 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914473 4703 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914479 4703 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914484 4703 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914490 4703 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914496 4703 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914507 4703 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914512 4703 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914518 4703 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914523 4703 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.914534 4703 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914826 4703 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914837 4703 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914843 4703 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914848 4703 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914853 4703 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914858 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914864 4703 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914871 4703 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914882 4703 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914887 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914903 4703 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914930 4703 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914936 4703 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914961 4703 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914974 4703 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914979 4703 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914984 4703 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914988 4703 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.914993 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915141 4703 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915148 4703 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915153 4703 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915158 4703 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915161 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915166 4703 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915170 4703 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915175 4703 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 
12:05:00.915179 4703 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915196 4703 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915208 4703 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915213 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915216 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915220 4703 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915224 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915228 4703 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915231 4703 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915235 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915239 4703 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915242 4703 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915246 4703 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915250 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915253 4703 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915257 4703 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915260 4703 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915264 4703 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915269 4703 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915274 4703 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915278 4703 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915283 4703 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915288 4703 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915292 4703 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915296 4703 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915300 4703 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915304 4703 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915309 4703 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915312 4703 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915316 4703 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915320 4703 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915323 4703 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915327 4703 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915331 4703 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915335 4703 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915338 4703 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915342 4703 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915347 4703 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
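Once the warnings are out of the way, the resolved gate set is printed by feature_gate.go:386 as a Go map literal (feature gates: {map[CloudDualStackNodeIPs:true ...]}); the dump has already appeared twice above and recurs once more just below. A sketch that turns that literal into a Python dict, assuming gate names never contain spaces or colons (true of every name shown):

import re

MAP_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_gate_map(journal_text: str) -> dict:
    """Turn the Go-style 'feature gates: {map[...]}' dump into {gate: bool}."""
    body = MAP_RE.search(journal_text).group(1)
    gates = {}
    for pair in body.split():
        name, _, value = pair.partition(":")
        gates[name] = (value == "true")
    return gates

text = ("I1209 12:05:00.915377 4703 feature_gate.go:386] feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}")
gates = parse_gate_map(text)
print(gates["NodeSwap"], gates["KMSv1"])  # False True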
Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915351 4703 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915355 4703 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915359 4703 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915363 4703 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915367 4703 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.915371 4703 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.915377 4703 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.915737 4703 server.go:940] "Client rotation is on, will bootstrap in background" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.917920 4703 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.917992 4703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
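The certificate_manager entries just below report a client certificate expiring 2026-02-24 05:52:08 UTC, a (jittered) rotation deadline of 2026-01-05 05:01:41.794 UTC, and a sleep of 640h56m40.875215829s until that deadline. The wait is simply deadline minus the current log time; a quick check with the timestamps from those lines (nanosecond digits truncated to the microseconds Python's datetime carries):

from datetime import datetime, timezone

# Values copied from the certificate_manager entries below.
now      = datetime(2025, 12, 9, 12, 5, 0, 918863, tzinfo=timezone.utc)  # time of the "Waiting ..." entry
deadline = datetime(2026, 1, 5, 5, 1, 41, 794074, tzinfo=timezone.utc)   # logged rotation deadline

wait = deadline - now
hours, rem = divmod(int(wait.total_seconds()), 3600)
minutes, seconds = divmod(rem, 60)
print(f"{hours}h{minutes}m{seconds}s")  # 640h56m40s -- matches the logged 640h56m40.875...s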
Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.918453 4703 server.go:997] "Starting client certificate rotation" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.918477 4703 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.918794 4703 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 05:01:41.79407489 +0000 UTC Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.918863 4703 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 640h56m40.875215829s for next certificate rotation Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.923247 4703 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.926395 4703 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.934569 4703 log.go:25] "Validated CRI v1 runtime API" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.950778 4703 log.go:25] "Validated CRI v1 image API" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.952732 4703 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.955384 4703 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-12-00-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.955413 4703 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.969443 4703 manager.go:217] Machine: {Timestamp:2025-12-09 12:05:00.967077135 +0000 UTC m=+0.215840674 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:538480e3-ee75-4c42-9816-5a001726e0b5 BootID:cc650f57-0f1e-4118-b5e8-874027bb4fd3 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:bd:8e:42 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:bd:8e:42 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:29:31:13 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:9e:fb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cf:58:48 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6e:e9:a0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ba:f2:54:87:b5:53 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:e9:5c:19:9d:14 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.969683 4703 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.970061 4703 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.971465 4703 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.971793 4703 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.971859 4703 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972130 4703 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972141 4703 container_manager_linux.go:303] "Creating device plugin manager" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972376 4703 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972414 4703 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 
12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972735 4703 state_mem.go:36] "Initialized new in-memory state store" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.972831 4703 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.973469 4703 kubelet.go:418] "Attempting to sync node with API server" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.973494 4703 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.973519 4703 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.973535 4703 kubelet.go:324] "Adding apiserver pod source" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.973551 4703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.975575 4703 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.976011 4703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.977335 4703 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.977972 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978003 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978016 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978030 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978051 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978064 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978076 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978097 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978113 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978125 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978143 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978164 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.978470 4703 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.979154 4703 server.go:1280] "Started kubelet" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.979997 4703 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 
12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.980302 4703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.980550 4703 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.980599 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:00 crc kubenswrapper[4703]: E1209 12:05:00.980953 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.981030 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:00 crc kubenswrapper[4703]: E1209 12:05:00.981104 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.981315 4703 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 12:05:00 crc systemd[1]: Started Kubernetes Kubelet. 
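Each "connection refused" above is a client-go reflector re-issuing its LIST against https://api-int.crc.testing:6443 and backing off until the API server, itself a static pod this kubelet has yet to start, begins answering; the failures are expected during bootstrap and clear on their own. A standalone Go sketch of that probe-and-retry loop, illustrative only (the endpoint is taken from the log; the attempt count and backoff constants are assumptions, not client-go's):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "api-int.crc.testing:6443" // endpoint from the log
        backoff := 200 * time.Millisecond
        for i := 0; i < 5; i++ {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err != nil {
                // Mirrors the W/E reflector lines: log, back off, retry.
                fmt.Printf("W reflector: list failed: %v (retrying in %v)\n", err, backoff)
                time.Sleep(backoff)
                backoff *= 2 // exponential backoff; the real client caps this
                continue
            }
            conn.Close()
            fmt.Println("I reflector: connected, starting watch")
            return
        }
    }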
Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.982241 4703 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.982629 4703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.982755 4703 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:08:53.790533175 +0000 UTC Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.982806 4703 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 386h3m52.807729009s for next certificate rotation Dec 09 12:05:00 crc kubenswrapper[4703]: E1209 12:05:00.982355 4703 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f8a893c495898 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:05:00.979116184 +0000 UTC m=+0.227879713,LastTimestamp:2025-12-09 12:05:00.979116184 +0000 UTC m=+0.227879713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983229 4703 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 09 12:05:00 crc kubenswrapper[4703]: E1209 12:05:00.983240 4703 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983245 4703 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983257 4703 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983860 4703 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983879 4703 factory.go:55] Registering systemd factory Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.983890 4703 factory.go:221] Registration of the systemd container factory successfully Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.984211 4703 factory.go:153] Registering CRI-O factory Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.984227 4703 factory.go:221] Registration of the crio container factory successfully Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.984249 4703 factory.go:103] Registering Raw factory Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.984264 4703 manager.go:1196] Started watching for new ooms in manager Dec 09 12:05:00 crc kubenswrapper[4703]: W1209 12:05:00.984286 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:00 crc 
kubenswrapper[4703]: E1209 12:05:00.984342 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.985684 4703 manager.go:319] Starting recovery of all containers Dec 09 12:05:00 crc kubenswrapper[4703]: E1209 12:05:00.986270 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.989661 4703 server.go:460] "Adding debug handlers to kubelet server" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996634 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996709 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996722 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996736 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996746 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996757 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996768 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996778 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 
09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996790 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996800 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996809 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996819 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996828 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996841 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996870 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996881 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996891 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996900 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996910 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 09 
12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996918 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996928 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996937 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996946 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996955 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996966 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996978 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.996991 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997001 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997011 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997050 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 
12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997078 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997091 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997101 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997110 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997119 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997131 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997155 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997168 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997181 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997228 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997240 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 09 12:05:00 crc 
kubenswrapper[4703]: I1209 12:05:00.997251 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997261 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997270 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997280 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997291 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997301 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997314 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997324 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997334 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997347 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997357 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 09 12:05:00 crc 
kubenswrapper[4703]: I1209 12:05:00.997373 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997384 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997395 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997406 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997415 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997425 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997435 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997445 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997454 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997465 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997476 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997488 4703 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997499 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997509 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997520 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997529 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997538 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997548 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997561 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997574 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997585 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997597 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997611 4703 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997623 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997635 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997646 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997657 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997668 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997678 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997688 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997699 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997718 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997735 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997750 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997762 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997775 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997789 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997804 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997818 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997833 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997846 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997860 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997872 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997887 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997900 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997913 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997927 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997941 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997951 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997963 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997976 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.997991 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998011 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998026 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998039 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998052 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998067 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998080 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998094 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998111 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998122 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998132 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998144 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998153 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998163 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998174 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998184 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998213 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998227 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998247 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 09 12:05:00 crc kubenswrapper[4703]: I1209 12:05:00.998262 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998276 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998288 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998302 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998314 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998329 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998342 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.998354 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999834 4703 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999859 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999873 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999889 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999902 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999915 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999927 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999950 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999963 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999976 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:00.999988 4703 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000001 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000013 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000028 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000040 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000052 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000064 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000076 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000088 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000100 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000111 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000127 4703 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000139 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000151 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000165 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000177 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000215 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000237 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000252 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000266 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000277 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000288 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000300 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000314 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000326 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000349 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000365 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000380 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000392 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000408 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000419 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000433 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000444 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000455 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000473 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000490 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000505 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000518 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000532 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000546 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000563 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000578 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000592 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000605 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000617 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000631 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000643 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000655 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000666 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000680 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000723 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000739 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000751 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000764 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000775 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000786 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000798 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000811 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000822 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000834 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000845 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000859 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000872 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000884 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000896 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000910 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000925 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000943 4703 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000956 4703 reconstruct.go:97] "Volume reconstruction finished" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.000967 4703 reconciler.go:26] "Reconciler: start to sync state" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.005801 4703 manager.go:324] Recovery completed Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.017818 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.020442 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.020513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.020531 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.021626 4703 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.021652 4703 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.021675 4703 state_mem.go:36] "Initialized new in-memory state store" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.065615 4703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.067921 4703 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.068112 4703 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.068286 4703 kubelet.go:2335] "Starting kubelet main sync loop" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.068439 4703 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.068954 4703 policy_none.go:49] "None policy: Start" Dec 09 12:05:01 crc kubenswrapper[4703]: W1209 12:05:01.070269 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.070373 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.070643 4703 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.070674 4703 state_mem.go:35] "Initializing new in-memory state store" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.083647 4703 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.135654 4703 manager.go:334] "Starting Device Plugin manager" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.135710 4703 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.135725 4703 server.go:79] "Starting device plugin registration server" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.136142 4703 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.136161 4703 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.136433 4703 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.136706 4703 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.136739 4703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.142777 4703 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.170022 4703 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 12:05:01 crc kubenswrapper[4703]: 
I1209 12:05:01.170150 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171445 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171467 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171568 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171835 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.171900 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172534 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172668 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172776 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172974 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.172984 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.173177 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.173233 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.173400 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174448 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174475 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174487 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174557 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174595 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174612 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174845 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.174871 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175334 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175471 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175551 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175709 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175804 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175733 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.175831 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.176940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.176939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177153 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177166 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177215 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177348 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.177370 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.178006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.178090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.178154 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.189385 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.203719 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.203993 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204163 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204224 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204246 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204264 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204282 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204302 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204339 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204365 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204384 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204406 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204422 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204448 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.204464 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.236983 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.237893 4703 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.237934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.237944 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.237967 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.238485 4703 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305461 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305647 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305599 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305719 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305763 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305737 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305801 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305815 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305877 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305883 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305900 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305931 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305946 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305960 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305972 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.305996 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306001 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306021 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306035 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306047 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306054 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306075 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306085 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306093 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306102 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306124 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306138 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306167 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.306317 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.438924 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.439970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.440006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.440017 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.440037 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.440406 4703 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.503293 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.518693 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.526565 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: W1209 12:05:01.529358 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0095299d8c8ced44a8ca21b4988e896784c2aa9c4970674d4360c91b09480d35 WatchSource:0}: Error finding container 0095299d8c8ced44a8ca21b4988e896784c2aa9c4970674d4360c91b09480d35: Status 404 returned error can't find the container with id 0095299d8c8ced44a8ca21b4988e896784c2aa9c4970674d4360c91b09480d35 Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.531601 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: W1209 12:05:01.545580 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-37923b0ab3181a65c4c8a06755f56539ae5bae2f6426496250ce7aa766ef2aa4 WatchSource:0}: Error finding container 37923b0ab3181a65c4c8a06755f56539ae5bae2f6426496250ce7aa766ef2aa4: Status 404 returned error can't find the container with id 37923b0ab3181a65c4c8a06755f56539ae5bae2f6426496250ce7aa766ef2aa4 Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.560806 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:01 crc kubenswrapper[4703]: W1209 12:05:01.583312 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-065f3872cc7c159ddbd09e915e1403e8e5df6ca1212c41db603f632a51f22c10 WatchSource:0}: Error finding container 065f3872cc7c159ddbd09e915e1403e8e5df6ca1212c41db603f632a51f22c10: Status 404 returned error can't find the container with id 065f3872cc7c159ddbd09e915e1403e8e5df6ca1212c41db603f632a51f22c10 Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.590891 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.841079 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.842316 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.842351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.842361 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.842386 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: E1209 12:05:01.842860 4703 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Dec 09 12:05:01 crc kubenswrapper[4703]: I1209 12:05:01.982444 4703 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.076850 4703 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68" exitCode=0 Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.076923 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.076995 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"065f3872cc7c159ddbd09e915e1403e8e5df6ca1212c41db603f632a51f22c10"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.077080 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.078040 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.078060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.078070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.079367 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d" exitCode=0 Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.079444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.079480 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"271d43ae1fa8995a874b1a39cfb47c8dc540b42c4edd0d5c75271a81e07214f3"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.079608 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.080498 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.080513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.080521 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.080540 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.080561 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"37923b0ab3181a65c4c8a06755f56539ae5bae2f6426496250ce7aa766ef2aa4"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.081441 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 
12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.082079 4703 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547" exitCode=0 Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.082226 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.082244 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94fb3cd8adfb4d0a8e6ada9946e85d540629545df392a5e7aa5e897ffb5f12b1"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.082753 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.084971 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.085018 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.085028 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.086331 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.086351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.086359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.087612 4703 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647" exitCode=0 Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.087657 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.087688 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0095299d8c8ced44a8ca21b4988e896784c2aa9c4970674d4360c91b09480d35"} Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.087783 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.089595 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.089619 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.089627 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: W1209 12:05:02.146720 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.146897 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:02 crc kubenswrapper[4703]: W1209 12:05:02.179666 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.179763 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:02 crc kubenswrapper[4703]: W1209 12:05:02.273908 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.274003 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.391923 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Dec 09 12:05:02 crc kubenswrapper[4703]: W1209 12:05:02.439038 4703 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.201:6443: connect: connection refused Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.439146 4703 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.201:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.643130 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 
12:05:02.644543 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.644691 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.644710 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:02 crc kubenswrapper[4703]: I1209 12:05:02.644739 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:02 crc kubenswrapper[4703]: E1209 12:05:02.645414 4703 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.201:6443: connect: connection refused" node="crc" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.092210 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fe51d85ebdde170cd7531214b7f4c74c76e435371aa9bdec11400c38d6733d6a"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.092339 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.092948 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.092979 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.092990 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.094623 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.094691 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.094706 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.095034 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.096687 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.096719 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.096729 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 
12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.097989 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098021 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098031 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098040 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098052 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098122 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098699 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098723 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.098731 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.100343 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.100374 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.100386 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.100445 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.100983 4703 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.101006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.101015 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103176 4703 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8" exitCode=0 Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103229 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8"} Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103308 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103903 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:03 crc kubenswrapper[4703]: I1209 12:05:03.103940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.107516 4703 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415" exitCode=0 Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.107627 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.108130 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415"} Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.108237 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.108727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.108763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.108775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.109483 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.109501 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.109509 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:04 crc 
kubenswrapper[4703]: I1209 12:05:04.245674 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.246960 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.247012 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.247027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.247063 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.308634 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.308788 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.308832 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.312896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.312935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.312945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:04 crc kubenswrapper[4703]: I1209 12:05:04.953553 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113703 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"113937f0cd549b93c1c4e084ab24af7044c8a3a887db183bcebed112ee2e091e"} Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113759 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc"} Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113777 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3"} Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113782 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113787 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f"} Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113879 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf"} Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.113904 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114710 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114745 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114778 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114798 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.114807 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.488528 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.488670 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.488709 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.489797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.489831 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.489844 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:05 crc kubenswrapper[4703]: I1209 12:05:05.625842 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 12:05:06 crc kubenswrapper[4703]: I1209 12:05:06.116109 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:06 crc kubenswrapper[4703]: I1209 12:05:06.117251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:06 crc kubenswrapper[4703]: I1209 12:05:06.117302 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:06 crc kubenswrapper[4703]: I1209 12:05:06.117314 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.119140 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.120622 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 
12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.120709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.120732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.954975 4703 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 12:05:07 crc kubenswrapper[4703]: I1209 12:05:07.955099 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.301025 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.301482 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.303634 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.303706 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.303723 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.533597 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.534278 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.537178 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.537306 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:08 crc kubenswrapper[4703]: I1209 12:05:08.537331 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:09 crc kubenswrapper[4703]: I1209 12:05:09.458359 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:09 crc kubenswrapper[4703]: I1209 12:05:09.458618 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:09 crc kubenswrapper[4703]: I1209 12:05:09.460155 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:09 crc kubenswrapper[4703]: I1209 12:05:09.460220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:09 crc kubenswrapper[4703]: I1209 12:05:09.460232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:11 crc kubenswrapper[4703]: E1209 12:05:11.142871 4703 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 12:05:11 crc kubenswrapper[4703]: I1209 12:05:11.455102 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:11 crc kubenswrapper[4703]: I1209 12:05:11.455334 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:11 crc kubenswrapper[4703]: I1209 12:05:11.456514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:11 crc kubenswrapper[4703]: I1209 12:05:11.456542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:11 crc kubenswrapper[4703]: I1209 12:05:11.456551 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.119243 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.119425 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.120428 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.120477 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.120490 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.901416 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.901613 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.902756 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.902797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.902809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.906169 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:12 crc kubenswrapper[4703]: I1209 12:05:12.982823 4703 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 12:05:13 crc 
kubenswrapper[4703]: I1209 12:05:13.135151 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.135875 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.135900 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.135911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.139330 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.257073 4703 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.257132 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.264911 4703 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 12:05:13 crc kubenswrapper[4703]: I1209 12:05:13.264967 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 12:05:14 crc kubenswrapper[4703]: I1209 12:05:14.137507 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:14 crc kubenswrapper[4703]: I1209 12:05:14.138482 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:14 crc kubenswrapper[4703]: I1209 12:05:14.138515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:14 crc kubenswrapper[4703]: I1209 12:05:14.138529 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.495077 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.495226 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.495649 4703 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.495710 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.496623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.496694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.496709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.499349 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.649758 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.650335 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.651416 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.651451 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.651460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:15 crc kubenswrapper[4703]: I1209 12:05:15.664032 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.141973 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.142256 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.142905 4703 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.142996 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.143893 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.143946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.143969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.144002 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.144053 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:16 crc kubenswrapper[4703]: I1209 12:05:16.144064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:17 crc kubenswrapper[4703]: I1209 12:05:17.954996 4703 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 12:05:17 crc kubenswrapper[4703]: I1209 12:05:17.955071 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 12:05:18 crc kubenswrapper[4703]: E1209 12:05:18.260677 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.262007 4703 trace.go:236] Trace[2021157556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:05:05.292) (total time: 12969ms): Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[2021157556]: ---"Objects listed" error: 12969ms (12:05:18.261) Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[2021157556]: [12.969430856s] [12.969430856s] END Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.262037 4703 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.262304 4703 trace.go:236] Trace[400700279]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:05:04.165) (total time: 14096ms): Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[400700279]: ---"Objects listed" error: 14096ms (12:05:18.262) Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[400700279]: [14.096550018s] [14.096550018s] END Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.262323 4703 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.262824 4703 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.263329 4703 trace.go:236] Trace[327756256]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:05:04.144) (total time: 14119ms): Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[327756256]: ---"Objects listed" error: 14119ms (12:05:18.263) Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[327756256]: [14.119287162s] [14.119287162s] END Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.263353 4703 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.264519 4703 trace.go:236] Trace[1054040718]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:05:05.428) (total time: 12835ms): Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[1054040718]: ---"Objects listed" error: 12835ms (12:05:18.264) Dec 09 12:05:18 crc kubenswrapper[4703]: Trace[1054040718]: [12.835498548s] [12.835498548s] END Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.264542 4703 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 12:05:18 crc kubenswrapper[4703]: E1209 12:05:18.264843 4703 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.617378 4703 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49834->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.617469 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49834->192.168.126.11:17697: read: connection reset by peer" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.983782 4703 apiserver.go:52] "Watching apiserver" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.986253 4703 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.988294 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-rfmng","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989117 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989274 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989274 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989313 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989408 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:18 crc kubenswrapper[4703]: E1209 12:05:18.989552 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989672 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:18 crc kubenswrapper[4703]: E1209 12:05:18.989760 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.989867 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:18 crc kubenswrapper[4703]: E1209 12:05:18.990205 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.992409 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.992775 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.992788 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.993593 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.993824 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.993849 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.993856 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.993908 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.994020 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.994276 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.995280 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 12:05:18 crc kubenswrapper[4703]: I1209 12:05:18.996199 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.008448 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.018072 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.018072 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.027873 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.044468 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.059468 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.072097 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.084310 4703 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.088806 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
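
Every one of these failures bottoms out in the same dial error against https://127.0.0.1:9743, the loopback endpoint used by the pod.network-node-identity.openshift.io webhook, whose own webhook container the entry above shows still stuck in ContainerCreating: a startup ordering loop that resolves once that pod is running. A trivial probe for the endpoint, useful when deciding whether the webhook or the network is at fault:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The address comes from the failed Post URL in the log entries above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            fmt.Println("webhook endpoint down:", err) // matches "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("webhook endpoint is listening")
    }
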
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.150494 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.152099 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff" exitCode=255 Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.152139 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff"} Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168732 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168782 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168803 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168829 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:05:19 crc 
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168853 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168912 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168930 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168962 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.168978 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169001 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169016 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169032 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169049 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169067 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169084 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169101 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169117 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169136 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169152 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169169 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169201 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169219 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169236 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169253 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169270 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169287 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169304 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169321 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169337 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169355 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169370 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169387 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169403 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169419 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169438 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169457 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169473 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169497 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169516 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169535 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169556 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169574 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169590 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169608 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169730 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169760 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169778 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169795 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169812 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169829 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169848 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169901 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169919 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169937 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169957 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169974 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169993 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170011 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170026 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170045 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170063 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170080 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170097 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170127 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170143 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170160 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170177 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170233 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170250 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170275 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170290 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170308 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170326 4703 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170341 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170358 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170375 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170392 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170411 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170429 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170445 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170462 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170477 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:05:19 crc kubenswrapper[4703]: 
I1209 12:05:19.170499 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170515 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170540 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170556 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170573 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170591 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170609 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170627 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170646 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170662 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170678 4703 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170695 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170713 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170729 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170746 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170763 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170784 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170801 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170819 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170842 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170858 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170874 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170893 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170911 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170927 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170944 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170964 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171015 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171037 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171062 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171082 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171100 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171123 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171141 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171161 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171181 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171210 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171225 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171246 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171264 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171281 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171299 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171315 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171330 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171348 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171366 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171384 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171404 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171422 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171448 4703 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171465 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171484 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171502 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171527 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171546 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171565 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171582 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171603 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171621 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 12:05:19 crc kubenswrapper[4703]: 
I1209 12:05:19.171641 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171660 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171679 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171700 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171719 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171738 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171758 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171778 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171796 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171813 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171831 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171848 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171868 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171885 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171904 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171923 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171941 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171958 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171976 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171994 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172012 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172032 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172051 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172068 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172089 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172109 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172128 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172148 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172167 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172199 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172219 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172236 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172255 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172274 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172292 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172310 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172327 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172348 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172366 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172385 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172403 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172421 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172446 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172465 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172483 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172506 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172549 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172567 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172585 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172604 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172620 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172637 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172654 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172708 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172739 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-hosts-file\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172765 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172785 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172805 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf85c\" (UniqueName: \"kubernetes.io/projected/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-kube-api-access-qf85c\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172826 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172847 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172866 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172886 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172949 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172973 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172997 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173036 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173056 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173075 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182864 4703 scope.go:117] "RemoveContainer" containerID="da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169258 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.169902 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170079 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170239 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170382 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170447 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170559 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170690 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.170917 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171018 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171227 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171445 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171464 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171685 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171818 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171878 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.171955 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172079 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.172100 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.173211 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173403 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173438 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.173676 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.174355 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.174785 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.175140 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.175268 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.175685 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.176222 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.176627 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.176772 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.176879 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.177222 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.177298 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.177362 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.178835 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.179053 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.179243 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.179439 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.179644 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.179849 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.180266 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.180280 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.180979 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181046 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181136 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181270 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181301 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181475 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181611 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181661 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181679 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.181978 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182218 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182279 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182463 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182521 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.182685 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183155 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183473 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183501 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183701 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183819 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183917 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.183997 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.184199 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.184298 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.184352 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:19.684331196 +0000 UTC m=+18.933094715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.184613 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.184675 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.184830 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.185081 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.186261 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.186270 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.187150 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.187580 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.187806 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.187890 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.188539 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.188622 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.188912 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189064 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189170 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189361 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189482 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189601 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189680 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189772 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.189802 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190178 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190397 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190463 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190645 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190735 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190802 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.190812 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.193527 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.193826 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.193897 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194115 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194476 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194477 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194737 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194921 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194968 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.194974 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.195109 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.195469 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.195571 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.195746 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:19.695688396 +0000 UTC m=+18.944451915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.196059 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.196243 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.196523 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.197124 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.197476 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.208835 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.208915 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:19.708896971 +0000 UTC m=+18.957660490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.209576 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.209891 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.211435 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.212559 4703 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.212572 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.213157 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.213994 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.214169 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
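The swap_util.go line in the batch above is a probe-and-default: the kubelet wants to test whether tmpfs supports the noswap mount option, but the test directory under /var/lib/kubelet/plugins/kubernetes.io/empty-dir does not exist this early in startup, so it logs the stat error and proceeds as if noswap were unsupported. A compressed Go sketch of that pattern (illustrative only; the real probe attempts an actual tmpfs mount rather than a bare stat):

    package main

    import (
        "fmt"
        "os"
    )

    // tmpfsNoswapSupported stats the test directory and treats any error as
    // "assume unsupported", mirroring the log line's behavior of degrading
    // gracefully instead of failing startup.
    func tmpfsNoswapSupported(testDir string) bool {
        if _, err := os.Stat(testDir); err != nil {
            fmt.Printf("error creating dir to test if tmpfs noswap is enabled. Assuming not supported: %v\n", err)
            return false
        }
        // A real probe would mount a tmpfs with the noswap option here and
        // check whether the mount succeeds.
        return true
    }

    func main() {
        fmt.Println(tmpfsNoswapSupported("/var/lib/kubelet/plugins/kubernetes.io/empty-dir"))
    }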
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.214339 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.214487 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.218226 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.218563 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.218835 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.219940 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.219998 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.220103 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.220505 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.221314 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.224596 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.224881 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.225291 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.225641 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.225830 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.226042 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.226753 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.227007 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.227224 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.228009 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.228069 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.228086 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.228169 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:19.728148331 +0000 UTC m=+18.976912031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.233928 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.240259 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.246168 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.247277 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.247576 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.247577 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.247604 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.247658 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.247733 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.247771 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:19.747748153 +0000 UTC m=+18.996511882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.248413 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.248869 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.248883 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.255394 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.255461 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.256431 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.256978 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.257246 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.257392 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.256207 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.259416 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.259663 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.263167 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.263283 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.264038 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.267704 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.268005 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.268137 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.268120 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.268724 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.268736 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.269060 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.271442 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.271792 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.272525 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.274002 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.277791 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.277906 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.277923 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278022 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278082 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-hosts-file\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278123 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf85c\" (UniqueName: \"kubernetes.io/projected/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-kube-api-access-qf85c\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278173 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278205 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278455 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278534 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278622 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-hosts-file\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278717 4703 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278735 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278749 4703 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278749 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278772 4703 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278784 4703 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278794 4703 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278812 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278824 4703 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278835 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278846 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278862 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278873 4703 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278883 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278893 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278888 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278907 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278954 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278982 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.278998 4703 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279043 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279059 4703 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279082 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279098 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279113 4703 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279127 4703 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279144 4703 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279157 4703 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279209 4703 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279226 4703 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279245 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279258 4703 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279271 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279288 4703 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279302 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279332 4703 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279343 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279392 4703 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279405 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279416 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279428 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279444 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 
09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279456 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279468 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279483 4703 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279494 4703 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279526 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279540 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279557 4703 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279570 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279583 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279594 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279610 4703 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279623 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279636 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on 
node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279648 4703 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279664 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279679 4703 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279692 4703 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279707 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279720 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279733 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279745 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279762 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279808 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279823 4703 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279836 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279853 4703 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 
12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279884 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279898 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279912 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279928 4703 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279943 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279956 4703 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279973 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.279998 4703 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280011 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280024 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280042 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280057 4703 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280073 4703 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath 
\"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280088 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280134 4703 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280148 4703 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280160 4703 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280177 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280449 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280660 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.280683 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.281158 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.281782 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.282050 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.282488 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284018 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284043 4703 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284221 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284303 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284218 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284512 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284525 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284432 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284539 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284582 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284611 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284622 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284714 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284755 4703 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284779 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284791 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284821 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284832 4703 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284846 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284858 4703 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284871 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284910 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284927 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284930 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284941 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284951 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284985 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.284998 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285010 4703 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285026 4703 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285035 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285065 4703 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285077 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285090 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285087 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285100 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285137 4703 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285150 4703 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285400 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285413 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285426 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285440 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285263 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.285325 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.287084 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.287427 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.287947 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.287967 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288011 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288029 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288044 4703 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288060 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288072 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288085 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288101 4703 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288115 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288128 4703 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288141 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288153 4703 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288165 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288179 4703 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288217 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288230 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288242 4703 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288255 4703 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288267 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288279 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288291 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288303 4703 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288315 4703 
reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288326 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288361 4703 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288375 4703 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288389 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288401 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288414 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288453 4703 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288467 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288480 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288493 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288505 4703 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288517 4703 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288529 4703 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288541 4703 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288554 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288566 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288580 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288590 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288603 4703 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288615 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.288627 4703 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.289824 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.290101 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.292552 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.292883 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.294350 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.296729 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.296912 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.297134 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.299623 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.304788 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.310446 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.311862 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.314182 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.314521 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf85c\" (UniqueName: \"kubernetes.io/projected/1a89eb00-454a-44b2-9b8e-6518b4a9d10c-kube-api-access-qf85c\") pod \"node-resolver-rfmng\" (UID: \"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\") " pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.316107 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.321761 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.325210 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rfmng" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.328254 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.377718 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ncbbx"] Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.378711 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.388964 4703 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.388998 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389010 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389021 4703 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389032 4703 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389043 4703 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389055 4703 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389067 4703 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389078 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389095 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389106 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389127 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389140 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389152 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389161 4703 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389171 4703 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389179 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389205 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389213 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389221 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389231 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389239 4703 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389247 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389255 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389264 4703 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389273 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389281 4703 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389290 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389300 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389309 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389319 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389328 4703 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.389336 4703 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.390320 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.390517 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.390666 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.390811 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.399095 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.420277 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.432444 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.444766 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.455333 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.468965 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.484873 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.490336 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c180fbd9-43db-436b-8166-3cbcb5a14da3-serviceca\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.490390 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c180fbd9-43db-436b-8166-3cbcb5a14da3-host\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.490432 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/c180fbd9-43db-436b-8166-3cbcb5a14da3-kube-api-access-22tv6\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.496156 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.513936 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.590908 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/c180fbd9-43db-436b-8166-3cbcb5a14da3-kube-api-access-22tv6\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.590953 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c180fbd9-43db-436b-8166-3cbcb5a14da3-serviceca\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.590997 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c180fbd9-43db-436b-8166-3cbcb5a14da3-host\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.591055 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c180fbd9-43db-436b-8166-3cbcb5a14da3-host\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.592058 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c180fbd9-43db-436b-8166-3cbcb5a14da3-serviceca\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.622290 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tv6\" (UniqueName: \"kubernetes.io/projected/c180fbd9-43db-436b-8166-3cbcb5a14da3-kube-api-access-22tv6\") pod \"node-ca-ncbbx\" (UID: \"c180fbd9-43db-436b-8166-3cbcb5a14da3\") " pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.627062 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.692110 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.692281 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:20.692253273 +0000 UTC m=+19.941016792 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.708298 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ncbbx" Dec 09 12:05:19 crc kubenswrapper[4703]: W1209 12:05:19.721229 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc180fbd9_43db_436b_8166_3cbcb5a14da3.slice/crio-6782c10d904cd2c7d71a74e38344fab463d8ddf3735ddeb5ac603a5add3a9921 WatchSource:0}: Error finding container 6782c10d904cd2c7d71a74e38344fab463d8ddf3735ddeb5ac603a5add3a9921: Status 404 returned error can't find the container with id 6782c10d904cd2c7d71a74e38344fab463d8ddf3735ddeb5ac603a5add3a9921 Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.793278 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.793325 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.793377 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:19 crc kubenswrapper[4703]: I1209 12:05:19.793400 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793464 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793472 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793504 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793524 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793529 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:20.79351308 +0000 UTC m=+20.042276599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793537 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793506 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793585 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:20.793557512 +0000 UTC m=+20.042321031 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793595 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793653 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:20.793637594 +0000 UTC m=+20.042401113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793484 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:19 crc kubenswrapper[4703]: E1209 12:05:19.793877 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:20.79382571 +0000 UTC m=+20.042589229 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.149414 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-q8sfk"] Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.149763 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.150037 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4r9tc"] Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.150788 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.156834 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rfmng" event={"ID":"1a89eb00-454a-44b2-9b8e-6518b4a9d10c","Type":"ContainerStarted","Data":"c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.156886 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rfmng" event={"ID":"1a89eb00-454a-44b2-9b8e-6518b4a9d10c","Type":"ContainerStarted","Data":"b4b3b44661c3c4d16b754b56531c0fae32025d37396f63afb1c99c435b9bb25a"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.159248 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.159333 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.159352 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"18c1c5709780d2e934cca7a7ab1e5ed66665d737261cdc1cda4b77e9a563ce6c"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.162245 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.164543 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.164685 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.165933 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ncbbx" event={"ID":"c180fbd9-43db-436b-8166-3cbcb5a14da3","Type":"ContainerStarted","Data":"a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.165960 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ncbbx" event={"ID":"c180fbd9-43db-436b-8166-3cbcb5a14da3","Type":"ContainerStarted","Data":"6782c10d904cd2c7d71a74e38344fab463d8ddf3735ddeb5ac603a5add3a9921"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.166443 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9zbgq"] Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.166904 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.167356 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"09d67aa387d1038afc00dc4deb2424e2b89ef8675a06f65259e54f2b945f0cb7"} Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.168452 4703 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.168488 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.168629 4703 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.168657 4703 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.168697 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.168654 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.168820 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7hrm8"] Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.169080 4703 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in 
the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.169101 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.169365 4703 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.169450 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.169742 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.169774 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cd0bbe2d19ea2fed674f3d328710ec89891007e8b8d18e9c8be73cd3fcbaaa31"} Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.169877 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.169896 4703 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.169917 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.173267 4703 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.173281 4703 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.173334 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.173302 4703 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.173761 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.173882 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.192830 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.193461 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.193680 4703 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.193871 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.194078 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.194380 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.194593 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.194827 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.194996 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.219217 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.269356 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.286904 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297711 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297775 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297795 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2fh\" (UniqueName: \"kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297817 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-daemon-config\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297839 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297858 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szdpx\" (UniqueName: \"kubernetes.io/projected/65954588-9afb-47ff-8c0b-f83bf290da27-kube-api-access-szdpx\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: 
\"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.297984 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298049 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298072 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298089 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-netns\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298104 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-etc-kubernetes\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-cnibin\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298138 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298154 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-hostroot\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298180 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sdw\" (UniqueName: \"kubernetes.io/projected/b57e1095-b0e1-4b30-a491-00852a5219e7-kube-api-access-v6sdw\") pod 
\"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298229 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-binary-copy\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298318 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298367 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298391 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298409 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-os-release\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298504 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298571 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298591 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298605 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298646 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-kubelet\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298693 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-system-cni-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298736 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4nx\" (UniqueName: \"kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298761 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298785 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298808 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-system-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298828 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-socket-dir-parent\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298851 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-bin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298917 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-cni-binary-copy\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.298957 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-k8s-cni-cncf-io\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299014 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299067 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-cnibin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299107 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-multus\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299130 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-multus-certs\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299156 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299200 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299229 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-os-release\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299279 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299307 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299332 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299427 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299472 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-conf-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299591 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/32956ceb-8540-406e-8693-e86efb46cd42-rootfs\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.299711 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.302576 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.315949 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.325852 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.339925 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.354268 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09
T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.366428 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.376796 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.389949 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.399978 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400386 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400452 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400479 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400507 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-conf-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400530 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/32956ceb-8540-406e-8693-e86efb46cd42-rootfs\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400538 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400578 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400596 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/32956ceb-8540-406e-8693-e86efb46cd42-rootfs\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400611 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-conf-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400562 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400664 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400633 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400700 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400805 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400855 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdpx\" (UniqueName: \"kubernetes.io/projected/65954588-9afb-47ff-8c0b-f83bf290da27-kube-api-access-szdpx\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400884 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400902 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400921 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm2fh\" (UniqueName: \"kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400937 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-daemon-config\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400973 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.400984 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-netns\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401005 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-etc-kubernetes\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401031 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401049 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401083 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-cnibin\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401099 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401134 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-hostroot\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401151 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sdw\" (UniqueName: \"kubernetes.io/projected/b57e1095-b0e1-4b30-a491-00852a5219e7-kube-api-access-v6sdw\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401206 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401223 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401244 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-cnibin\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401255 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-binary-copy\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: 
\"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401274 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401282 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-netns\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401317 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-etc-kubernetes\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401356 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401392 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401409 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401426 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401461 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-os-release\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401492 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-system-cni-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc 
kubenswrapper[4703]: I1209 12:05:20.401528 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401590 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401606 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-kubelet\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401627 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4nx\" (UniqueName: \"kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401642 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401647 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-hostroot\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401657 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401709 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401749 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-system-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401760 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401773 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-socket-dir-parent\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401797 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401797 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-bin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401825 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-bin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401828 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-cni-binary-copy\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401847 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-k8s-cni-cncf-io\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401856 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-os-release\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401865 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-system-cni-dir\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401886 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401908 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401930 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-kubelet\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401956 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-daemon-config\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.401863 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-multus\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402208 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-system-cni-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402234 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402247 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-multus-certs\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402278 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-multus-socket-dir-parent\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402267 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-multus-certs\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402282 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-run-k8s-cni-cncf-io\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402313 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-host-var-lib-cni-multus\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402319 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402326 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402358 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402382 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-cnibin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402404 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402435 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-cnibin\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402469 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b57e1095-b0e1-4b30-a491-00852a5219e7-cni-binary-copy\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402491 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8sfk\" (UID: 
\"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402514 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402547 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402561 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402567 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-os-release\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402617 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b57e1095-b0e1-4b30-a491-00852a5219e7-os-release\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402645 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402682 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65954588-9afb-47ff-8c0b-f83bf290da27-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402957 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.402968 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-binary-copy\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:20 crc 
kubenswrapper[4703]: I1209 12:05:20.405551 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.418473 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.423556 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm2fh\" (UniqueName: \"kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh\") pod \"ovnkube-node-7hrm8\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.431327 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.440572 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.458507 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\
\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.470386 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.490890 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.497861 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.505530 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: W1209 12:05:20.513214 4703 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9173444_5181_4ee4_b651_11d92ccab0d0.slice/crio-ce5f7a9dcad5dfa3d5115162b3a187e3c3cc4bc52dbb530fcf8c0a2d0b7efa59 WatchSource:0}: Error finding container ce5f7a9dcad5dfa3d5115162b3a187e3c3cc4bc52dbb530fcf8c0a2d0b7efa59: Status 404 returned error can't find the container with id ce5f7a9dcad5dfa3d5115162b3a187e3c3cc4bc52dbb530fcf8c0a2d0b7efa59 Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.519947 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.542069 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.562214 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.575056 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.704736 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.705021 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:22.704978699 +0000 UTC m=+21.953742218 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.805845 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.805926 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.805977 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.806014 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806082 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806133 4703 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806142 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806148 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806157 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806219 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806237 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806269 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:22.806219736 +0000 UTC m=+22.054983255 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806172 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806304 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:22.806283578 +0000 UTC m=+22.055047267 (durationBeforeRetry 2s). 
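The "No retries permitted until ... (durationBeforeRetry 2s)" messages above come from the kubelet's pending-operations tracker, which backs off exponentially on repeated volume operation failures. A minimal sketch of that retry arithmetic, assuming an initial 2 s delay that doubles per consecutive error up to an assumed 2 minute cap (the kubelet's real constants live in its exponentialbackoff package and are not shown in this log):

import datetime

INITIAL_DELAY = datetime.timedelta(seconds=2)  # matches "durationBeforeRetry 2s" above
MAX_DELAY = datetime.timedelta(minutes=2)      # assumed cap, not taken from the log

def next_retry(last_failure: datetime.datetime, consecutive_errors: int) -> datetime.datetime:
    """Earliest time a failed volume operation may be retried."""
    delay = min(INITIAL_DELAY * (2 ** (consecutive_errors - 1)), MAX_DELAY)
    return last_failure + delay

failed_at = datetime.datetime(2025, 12, 9, 12, 5, 20, 806219, tzinfo=datetime.timezone.utc)
print(next_retry(failed_at, 1))  # 2025-12-09 12:05:22.806219+00:00, matching the logged retry time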
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806329 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:22.806318909 +0000 UTC m=+22.055082518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: E1209 12:05:20.806346 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:22.806337889 +0000 UTC m=+22.055101518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:20 crc kubenswrapper[4703]: I1209 12:05:20.979780 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.069432 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.069486 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.069523 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.069638 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.069981 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
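The NetworkPluginNotReady errors above will repeat until a CNI network configuration appears in /etc/kubernetes/cni/net.d/. As a rough illustration of the readiness condition the runtime reports on, here is a sketch that inspects the directory the log names, assuming the conventional .conf, .conflist, and .json suffixes for CNI config files:

import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # directory named in the log
CNI_SUFFIXES = {".conf", ".conflist", ".json"}            # conventional CNI config suffixes

def cni_config_present(conf_dir: pathlib.Path) -> bool:
    """True once at least one CNI network config file exists."""
    if not conf_dir.is_dir():
        return False
    return any(p.suffix in CNI_SUFFIXES for p in conf_dir.iterdir() if p.is_file())

print(cni_config_present(CNI_CONF_DIR))  # False while the network provider is still starting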
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.070065 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.075063 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.075943 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.077511 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.078350 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.079536 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.080216 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.080400 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.081137 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.082354 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.083201 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.084378 4703 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.085012 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.086449 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.087123 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.087840 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.089027 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.089754 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.091025 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.091774 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.092595 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.093907 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.094071 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.094696 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.095917 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.096474 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.097749 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.098845 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.099851 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.101454 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.102115 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.103364 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.103947 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.104989 4703 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.105184 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.107149 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.108227 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.109308 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.111356 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.112397 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.113591 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.114493 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.115982 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.116649 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.117697 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.118415 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.119540 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.120114 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.121271 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.121823 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.123151 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.123772 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.124913 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.125525 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.126296 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.126672 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.127330 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.128119 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.150552 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.150911 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.166781 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdpx\" (UniqueName: \"kubernetes.io/projected/65954588-9afb-47ff-8c0b-f83bf290da27-kube-api-access-szdpx\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.168760 
4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sdw\" (UniqueName: \"kubernetes.io/projected/b57e1095-b0e1-4b30-a491-00852a5219e7-kube-api-access-v6sdw\") pod \"multus-9zbgq\" (UID: \"b57e1095-b0e1-4b30-a491-00852a5219e7\") " pod="openshift-multus/multus-9zbgq" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.176435 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.186696 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" exitCode=0 Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.186833 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.186867 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
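The patch bodies inside the "Failed to update status for pod" records are ordinary JSON strategic-merge patches, escaped twice by klog quoting inside the journal line. Stripping the quoting makes them machine-readable; a sketch using a shortened, hypothetical patch string in place of the real ones:

import json

# Shortened, hypothetical stand-in for one escaped patch string above (uid truncated).
raw = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095\\\"},\\\"status\\\":{\\\"phase\\\":\\\"Running\\\"}}'

# In the journal text, \\\" encodes \" inside the err string, which encodes a
# plain " in the patch JSON; one replacement undoes both quoting layers.
patch = json.loads(raw.replace(r'\\\"', '"'))
print(patch["status"]["phase"])  # -> Running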
pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"ce5f7a9dcad5dfa3d5115162b3a187e3c3cc4bc52dbb530fcf8c0a2d0b7efa59"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.203179 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.222331 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.237962 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.259502 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.273330 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.291885 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.306957 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.331302 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\
\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.351781 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.367532 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.381115 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.381573 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9zbgq" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.385388 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 12:05:21 crc kubenswrapper[4703]: W1209 12:05:21.394518 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57e1095_b0e1_4b30_a491_00852a5219e7.slice/crio-a7a132b8242ea23395be8de89b67f215e0d6e4c55929a1d164705039dc307033 WatchSource:0}: Error finding container a7a132b8242ea23395be8de89b67f215e0d6e4c55929a1d164705039dc307033: Status 404 returned error can't find the container with id a7a132b8242ea23395be8de89b67f215e0d6e4c55929a1d164705039dc307033 Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.401321 4703 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.401474 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist podName:65954588-9afb-47ff-8c0b-f83bf290da27 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:21.901449901 +0000 UTC m=+21.150213420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-4r9tc" (UID: "65954588-9afb-47ff-8c0b-f83bf290da27") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.401350 4703 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.401792 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls podName:32956ceb-8540-406e-8693-e86efb46cd42 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:21.90175902 +0000 UTC m=+21.150522729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls") pod "machine-config-daemon-q8sfk" (UID: "32956ceb-8540-406e-8693-e86efb46cd42") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.403559 4703 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.403665 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config podName:32956ceb-8540-406e-8693-e86efb46cd42 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:21.903639058 +0000 UTC m=+21.152402767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config") pod "machine-config-daemon-q8sfk" (UID: "32956ceb-8540-406e-8693-e86efb46cd42") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.404305 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.418342 4703 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.418494 4703 projected.go:194] Error preparing data for projected volume kube-api-access-tv4nx for pod openshift-machine-config-operator/machine-config-daemon-q8sfk: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.418640 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx podName:32956ceb-8540-406e-8693-e86efb46cd42 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:21.918615857 +0000 UTC m=+21.167379386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tv4nx" (UniqueName: "kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx") pod "machine-config-daemon-q8sfk" (UID: "32956ceb-8540-406e-8693-e86efb46cd42") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.419148 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.431570 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.446648 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.465634 4703 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.466094 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.467378 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.467409 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.467418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.467578 4703 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.478304 4703 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.478612 4703 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.478637 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.480042 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.480070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.480081 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.480103 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.480114 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.484455 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.498845 
4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.499331 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.501240 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df
807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"s
izeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.503428 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.504840 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.504896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.504910 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.504933 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.504947 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.514485 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.517392 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.531377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.531423 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.531449 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.531473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.531483 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.541155 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.551452 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.554803 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.555572 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.555625 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.555641 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.555662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.555676 4703 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.567347 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.571409 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.571462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.571473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.571493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.571504 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.582178 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:21 crc kubenswrapper[4703]: E1209 12:05:21.582353 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.584965 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.584994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.585004 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.585022 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.585037 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.669556 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.686592 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.686630 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.686643 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.686658 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.686670 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.730415 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.789725 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.789770 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.789782 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.789805 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.789817 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.893870 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.893911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.893920 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.893934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.893944 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.926781 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4nx\" (UniqueName: \"kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.926851 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.926885 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.926924 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.927662 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65954588-9afb-47ff-8c0b-f83bf290da27-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4r9tc\" (UID: \"65954588-9afb-47ff-8c0b-f83bf290da27\") " pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.927677 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32956ceb-8540-406e-8693-e86efb46cd42-mcd-auth-proxy-config\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.931356 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv4nx\" (UniqueName: \"kubernetes.io/projected/32956ceb-8540-406e-8693-e86efb46cd42-kube-api-access-tv4nx\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.931435 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/32956ceb-8540-406e-8693-e86efb46cd42-proxy-tls\") pod \"machine-config-daemon-q8sfk\" (UID: \"32956ceb-8540-406e-8693-e86efb46cd42\") " pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.961032 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.966328 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" Dec 09 12:05:21 crc kubenswrapper[4703]: W1209 12:05:21.975845 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32956ceb_8540_406e_8693_e86efb46cd42.slice/crio-2b02d6921941ac8e7f6c41b6c40f16c26530101503553965f72ceb0cd991cda8 WatchSource:0}: Error finding container 2b02d6921941ac8e7f6c41b6c40f16c26530101503553965f72ceb0cd991cda8: Status 404 returned error can't find the container with id 2b02d6921941ac8e7f6c41b6c40f16c26530101503553965f72ceb0cd991cda8 Dec 09 12:05:21 crc kubenswrapper[4703]: W1209 12:05:21.979832 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65954588_9afb_47ff_8c0b_f83bf290da27.slice/crio-57ddecb3927db844b353f9a7cf804e4c5a836052841ae6118dac39317a0ca9f2 WatchSource:0}: Error finding container 57ddecb3927db844b353f9a7cf804e4c5a836052841ae6118dac39317a0ca9f2: Status 404 returned error can't find the container with id 57ddecb3927db844b353f9a7cf804e4c5a836052841ae6118dac39317a0ca9f2 Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.998969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.999028 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.999040 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.999058 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:21 crc kubenswrapper[4703]: I1209 12:05:21.999070 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:21Z","lastTransitionTime":"2025-12-09T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.103500 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.103540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.103553 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.103578 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.103594 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.193623 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.193701 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.193711 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"2b02d6921941ac8e7f6c41b6c40f16c26530101503553965f72ceb0cd991cda8"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.196250 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerStarted","Data":"57ddecb3927db844b353f9a7cf804e4c5a836052841ae6118dac39317a0ca9f2"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.200751 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.200787 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.200800 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:05:22 crc 
kubenswrapper[4703]: I1209 12:05:22.200810 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.200820 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.200832 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.206438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.206505 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.206524 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.206548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.206564 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.209085 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerStarted","Data":"c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.209165 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerStarted","Data":"a7a132b8242ea23395be8de89b67f215e0d6e4c55929a1d164705039dc307033"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.211555 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.223246 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.252677 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.272349 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.287532 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.301876 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.309592 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.309676 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.309697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.309736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.309754 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.328764 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.343844 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.355023 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.370690 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.382937 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.396585 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.411322 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.413333 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.413370 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.413381 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.413399 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.413412 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.425043 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.439711 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.482841 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.507157 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.516218 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.516259 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.516270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.516290 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.516303 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.526070 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.557918 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.581679 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.598911 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.612721 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.619344 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.619381 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.619392 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.619413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.619424 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.627966 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.638944 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.651721 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.667925 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.682008 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:22Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.722001 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.722055 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.722066 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.722084 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.722095 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.732668 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.732876 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:26.732841316 +0000 UTC m=+25.981604835 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.824339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.824388 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.824402 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.824448 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.824463 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.833942 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.834005 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.834039 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.834076 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834148 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834199 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834214 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834230 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834232 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834251 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834267 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834271 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:26.834252867 +0000 UTC m=+26.083016446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834151 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834289 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:26.834281448 +0000 UTC m=+26.083045057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834309 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:26.834298009 +0000 UTC m=+26.083061528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:22 crc kubenswrapper[4703]: E1209 12:05:22.834328 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:26.834319399 +0000 UTC m=+26.083082918 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.927231 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.927269 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.927280 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.927294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:22 crc kubenswrapper[4703]: I1209 12:05:22.927302 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:22Z","lastTransitionTime":"2025-12-09T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.029688 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.029735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.029755 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.029772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.029785 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.069132 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.069151 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:23 crc kubenswrapper[4703]: E1209 12:05:23.069330 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:23 crc kubenswrapper[4703]: E1209 12:05:23.069592 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.069680 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:23 crc kubenswrapper[4703]: E1209 12:05:23.069747 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.132214 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.132254 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.132265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.132281 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.132293 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.215887 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966" exitCode=0 Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.216339 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.229729 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.236491 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.236919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.237051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.237151 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.237264 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.244122 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.257107 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.270069 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.282630 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.297732 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc 
kubenswrapper[4703]: I1209 12:05:23.311347 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.324519 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\
\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.339587 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.340039 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.340080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.340090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.340108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.340119 4703 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.351211 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.362870 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.373014 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.390160 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.441703 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.441735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.441746 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.441762 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.441773 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.545106 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.545152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.545165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.545199 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.545214 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.647887 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.647925 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.647933 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.647947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.647956 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.750292 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.750331 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.750339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.750354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.750364 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.852604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.852637 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.852645 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.852658 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.852668 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.954752 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.954787 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.954796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.954809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:23 crc kubenswrapper[4703]: I1209 12:05:23.954820 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:23Z","lastTransitionTime":"2025-12-09T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.057124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.057166 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.057179 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.057215 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.057251 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.159007 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.159039 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.159046 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.159059 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.159069 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
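The repeated KubeletNotReady condition above is the kubelet relaying the runtime's NetworkReady=false status: CRI-O keeps reporting the network as not ready while /etc/kubernetes/cni/net.d/ contains no CNI network definition (the ovnkube-controller container that would write one is still in PodInitializing). A minimal sketch of that directory check, using a hypothetical checkCNIConfig helper rather than the real libcni code path:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkCNIConfig is a hypothetical stand-in for the readiness probe the
// runtime performs via libcni: NetworkReady stays false until at least one
// CNI network definition exists in the configuration directory.
func checkCNIConfig(confDir string) error {
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			return err
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the condition message the kubelet logs above.
		return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
	}
	return nil
}

func main() {
	if err := checkCNIConfig("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Fprintln(os.Stderr, "NetworkReady=false:", err)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}
```

The interleaved "Failed to update status for pod" entries all end in Go's x509.CertificateInvalidError text: the API server's call to the pod.network-node-identity.openshift.io webhook fails TLS verification because the node clock (2025-12-09) is past the serving certificate's NotAfter (2025-08-24). A self-contained sketch that reproduces the identical error string with a throwaway self-signed certificate; the CommonName and dates are illustrative values taken from the log:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Throwaway self-signed certificate valid until 2025-08-24T17:21:41Z,
	// the NotAfter seen in the log. Error handling elided for brevity.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity.openshift.io"},
		NotBefore:             time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:              time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// Verify as of the kubelet's clock in the log, 2025-12-09T12:05:23Z.
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2025, 12, 9, 12, 5, 23, 0, time.UTC),
	})
	fmt.Println(err)
	// x509: certificate has expired or is not yet valid:
	// current time 2025-12-09T12:05:23Z is after 2025-08-24T17:21:41Z
}
```

Any CurrentTime inside the validity window makes the same Verify call succeed, so the status patches above stop failing once the certificate is rotated or the clock skew is resolved.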
Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.223052 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.224772 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4" exitCode=0 Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.224844 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.238344 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.249301 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.260907 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.260932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.260940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.260952 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.260963 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.266899 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.282252 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.298271 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.314923 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.328452 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.343264 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.362703 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.364002 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.364047 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.364057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.364076 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.364087 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.379857 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.395072 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf449
9236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.408298 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.419217 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.466908 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.467031 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.467038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.467052 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.467062 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.569808 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.569849 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.569861 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.569877 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.569885 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.671734 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.671771 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.671784 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.671819 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.671833 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.773900 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.773935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.773945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.773958 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.773966 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.876325 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.876366 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.876377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.876393 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.876402 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.958394 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.962230 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.965322 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.970622 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.979834 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.979997 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.980012 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.980020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.980033 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.980043 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:24Z","lastTransitionTime":"2025-12-09T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:24 crc kubenswrapper[4703]: I1209 12:05:24.993232 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:24Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.047247 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.058286 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.068632 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.068650 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.068721 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:25 crc kubenswrapper[4703]: E1209 12:05:25.068752 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.068612 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: E1209 12:05:25.068863 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:25 crc kubenswrapper[4703]: E1209 12:05:25.068936 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.082045 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.082077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.082135 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.082149 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.082159 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.083614 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.094625 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.104887 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.118514 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.129892 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.141930 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
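
A reading aid for these entries: the patch inside each err value is a Kubernetes strategic merge patch (the $setElementOrder/conditions key is that format's ordering directive), and it has been %q-quoted twice — once when the patch was embedded in the error string, once more when the journal line quoted the err value — which is why its quotes surface as \\\" sequences. A small sketch that peels one quoting layer with strconv.Unquote and pretty-prints the result; the fragment below is a shortened stand-in for a real patch (the uid is node-resolver-rfmng's), and the raw journal text needs Unquote applied once per layer:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// Shortened, illustrative stand-in for one quoting layer of a
	// status patch from this log; the real patches are much longer.
	quoted := `"{\"metadata\":{\"uid\":\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\"},\"status\":{\"phase\":\"Running\"}}"`

	raw, err := strconv.Unquote(quoted) // peel one %q layer; repeat per layer
	if err != nil {
		log.Fatal(err)
	}

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(pretty.String())
}

Decoded this way, each failed patch is an ordinary, well-formed status update; nothing in the payloads is malformed, consistent with the rejection happening at the webhook TLS handshake.
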
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.158157 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.169636 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.181248 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.184599 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.184637 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.184647 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.184663 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.184685 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.193555 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.203853 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.215366 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.228951 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2" exitCode=0 Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.229077 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.236778 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.250468 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.262098 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.271400 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.281692 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.287417 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.287440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.287449 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.287462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.287471 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.292855 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.303903 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.313367 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.330792 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.342973 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.354329 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.365505 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.374750 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.384893 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.390727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.390757 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.390768 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.390784 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.390798 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.397160 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.409485 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.423841 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.434553 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.445664 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.456684 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106f
af0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.467529 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.476954 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.493936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.493967 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.493977 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.493991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.494000 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.495077 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:25Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.596081 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.596128 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.596140 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.596158 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.596254 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.699430 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.699455 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.699464 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.699478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.699491 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.802772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.802817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.802834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.802851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.802862 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.905901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.905940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.905949 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.905965 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:25 crc kubenswrapper[4703]: I1209 12:05:25.905974 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:25Z","lastTransitionTime":"2025-12-09T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.008213 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.008653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.008665 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.008685 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.008696 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.111935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.111971 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.111980 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.111999 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.112007 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.218638 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.219152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.219164 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.219181 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.219217 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.237864 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490" exitCode=0 Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.237925 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.242817 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.243157 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.243352 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.251028 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.267684 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.270565 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.270955 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.280644 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.292248 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.303206 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.312245 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.320813 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.320856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.320869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.320883 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.320897 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.328726 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.355287 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.380046 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.397250 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.409544 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.423516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.423559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.423568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.423585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.423597 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.426886 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.439211 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.452100 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.467436 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.480288 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.499783 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.509871 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106f
af0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.520166 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.525434 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.525470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.525486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.525504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.525517 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.528685 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.546044 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.558116 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.570595 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.582087 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.597616 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.613953 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.627694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.627770 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.627782 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.627844 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.627858 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.628508 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.639279 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:26Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.730593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.730627 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.730636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.730650 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.730660 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.769840 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.770054 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.770029279 +0000 UTC m=+34.018792798 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.837553 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.837587 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.837597 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.837612 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.837621 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.870756 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.870797 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.870823 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.870842 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.870949 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.870952 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871000 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871047 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.871025198 +0000 UTC m=+34.119788797 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871081 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.87106745 +0000 UTC m=+34.119830969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871121 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871129 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871139 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871160 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.871154452 +0000 UTC m=+34.119918101 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.870963 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871174 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:26 crc kubenswrapper[4703]: E1209 12:05:26.871209 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.871203435 +0000 UTC m=+34.119967094 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.939500 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.939535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.939545 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.939558 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:26 crc kubenswrapper[4703]: I1209 12:05:26.939567 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:26Z","lastTransitionTime":"2025-12-09T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.041729 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.041769 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.041778 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.041792 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.041802 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.068826 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:27 crc kubenswrapper[4703]: E1209 12:05:27.068981 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.069278 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.069368 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:27 crc kubenswrapper[4703]: E1209 12:05:27.069550 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:27 crc kubenswrapper[4703]: E1209 12:05:27.069568 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.144239 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.144282 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.144293 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.144307 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.144317 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.246085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.246118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.246126 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.246139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.246149 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.247762 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerStarted","Data":"7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.247788 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.262612 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.275657 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.288000 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.303285 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.314370 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.333401 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.344350 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.348247 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.348286 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.348296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.348311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.348323 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.356963 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.367122 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.379215 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.392782 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.407378 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.422779 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.438616 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.449993 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.450029 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.450037 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.450050 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.450059 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.553454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.553514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.553528 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.553550 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.553568 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.656085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.656135 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.656147 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.656166 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.656177 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.758159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.758467 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.758484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.758501 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.758510 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.862139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.862229 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.862245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.862268 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.862285 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.964942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.964999 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.965011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.965033 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:27 crc kubenswrapper[4703]: I1209 12:05:27.965046 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:27Z","lastTransitionTime":"2025-12-09T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.068150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.068220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.068232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.068248 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.068260 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.170504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.170531 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.170541 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.170555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.170565 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.253505 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2" exitCode=0 Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.253624 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.254148 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.272296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.272335 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.272343 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.272356 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.272366 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.286512 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.306302 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.319160 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.332510 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106f
af0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.346445 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.356487 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.374921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.374951 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.374960 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.374974 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.374982 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.375719 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1
fc1a224705497a52fc489093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.391481 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.406036 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.416880 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.429871 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.448259 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.459435 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.470683 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.477998 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.478027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.478038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.478051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.478059 4703 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.580470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.580906 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.581004 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.581077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.581142 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.683461 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.683539 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.683548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.683561 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.683570 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.786468 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.786504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.786515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.786529 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.786541 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.889143 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.889200 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.889210 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.889224 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.889233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.991441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.991487 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.991495 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.991515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:28 crc kubenswrapper[4703]: I1209 12:05:28.991525 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:28Z","lastTransitionTime":"2025-12-09T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.068935 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.068983 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.068998 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:29 crc kubenswrapper[4703]: E1209 12:05:29.069068 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:29 crc kubenswrapper[4703]: E1209 12:05:29.069144 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:29 crc kubenswrapper[4703]: E1209 12:05:29.069203 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.093413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.093452 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.093461 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.093474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.093484 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.195802 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.195849 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.195860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.195876 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.195886 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.257855 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/0.log" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.260977 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093" exitCode=1 Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.261053 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.261716 4703 scope.go:117] "RemoveContainer" containerID="305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.265336 4703 generic.go:334] "Generic (PLEG): container finished" podID="65954588-9afb-47ff-8c0b-f83bf290da27" containerID="87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950" exitCode=0 Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.265373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerDied","Data":"87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.274655 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.284703 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.298922 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.298982 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.299037 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.299056 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.299069 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.309148 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.324826 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.336022 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.348362 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.360342 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.374749 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.387669 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.400814 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.402675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.402717 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.402730 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.402748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.402760 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.412388 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.421490 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.432306 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.442207 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.454257 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.463401 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.473503 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.487419 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106f
af0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.497952 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.505271 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.505311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.505324 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.505341 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.505353 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.506734 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.522092 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.533092 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.544170 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.559333 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.571517 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.585515 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.599123 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.606837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.606859 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.606867 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.606879 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.606903 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.613826 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:29Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.709536 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.709570 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.709577 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.709591 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.709599 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.812146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.812202 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.812220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.812238 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.812254 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.917159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.917227 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.917239 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.917251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:29 crc kubenswrapper[4703]: I1209 12:05:29.917260 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:29Z","lastTransitionTime":"2025-12-09T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.019933 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.019978 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.019989 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.020003 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.020015 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.121542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.121583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.121591 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.121603 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.121612 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.223932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.223965 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.223973 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.223987 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.223995 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.271720 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/0.log" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.274746 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.275044 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.278712 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" event={"ID":"65954588-9afb-47ff-8c0b-f83bf290da27","Type":"ContainerStarted","Data":"698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.289853 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.303082 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.316988 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.325805 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.325873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.325891 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.325919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.325938 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.328549 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.344445 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.358647 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.372253 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.386229 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.402697 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.418562 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.428552 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.428589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.428601 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.428621 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.428634 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.433114 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.447553 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.460262 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.480787 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.498733 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.566240 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.568064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.568102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.568116 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.568134 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.568145 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.578904 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.589721 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.599945 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.611940 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.622224 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.633605 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.645569 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.655946 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.670367 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.670408 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.670417 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.670433 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.670442 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.673069 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc
6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.684065 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.695525 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.711109 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.773123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.773173 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.773229 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.773257 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.773271 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.875911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.875965 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.875976 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.875993 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.876003 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.978506 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.978548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.978557 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.978575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:30 crc kubenswrapper[4703]: I1209 12:05:30.978586 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:30Z","lastTransitionTime":"2025-12-09T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.068941 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.069165 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.069181 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.069299 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.069688 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.069988 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.081436 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.081484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.081495 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.081516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.081533 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.084573 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-paren
t\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.097232 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.112721 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.128383 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.142878 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.164859 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.179120 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.183437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.183484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.183496 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.183514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.183526 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.190946 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.213166 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.226505 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.238754 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.249903 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.262473 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.273458 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284348 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/1.log" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284902 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284944 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.284970 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.285112 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/0.log" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.289277 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5" exitCode=1 Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.289355 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.289415 4703 scope.go:117] "RemoveContainer" containerID="305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.290466 4703 scope.go:117] "RemoveContainer" containerID="fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5" Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.290673 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.310627 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc
6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.326768 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.339925 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.348541 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.360994 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.372913 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.384580 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.387241 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.387269 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.387277 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.387290 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.387300 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.395351 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.407107 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.427564 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.452074 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.473305 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.486587 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.489358 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.489412 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.489424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.489439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.489448 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.499015 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.592233 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.592269 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.592277 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.592290 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.592299 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.696524 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.696792 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.696857 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.696921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.697024 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.757875 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.757932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.757946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.757967 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.757982 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.772944 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.778678 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.778732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.778741 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.778763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.778776 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.793289 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.798972 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.799016 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.799030 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.799048 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.799059 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.811537 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.815926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.815984 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.815994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.816020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.816032 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.828784 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.832884 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.832926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.832936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.832956 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.832971 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.847097 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:31 crc kubenswrapper[4703]: E1209 12:05:31.847281 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.849468 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
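All the retries above fail identically: the kubelet cannot patch its own Node object because the apiserver must first call the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-09T12:05:31Z. A minimal Go sketch of how one might confirm the certificate's validity window from the node itself; the address is taken from the log, and InsecureSkipVerify is used only so the already-expired certificate can be fetched for inspection instead of being rejected during the handshake:

package main

// Connects to the webhook endpoint named in the kubelet errors above and
// prints the validity window of the certificate it serves.

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert even though it is expired
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

Against this endpoint the sketch would be expected to report a notAfter of 2025-08-24T17:21:41Z, matching the x509 error string attached to every failed patch.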
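The NodeNotReady condition repeated throughout this window has a separate cause: the container runtime (CRI-O here) reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet, and it stays empty until OVN-Kubernetes (whose control-plane pod is only ADDed just below, at 12:05:32) has started and written one. A rough Go approximation of the readiness test behind that message; the directory path comes from the log, while treating .conf, .conflist, and .json files as valid configurations is an assumption based on common CNI loader behaviour:

package main

// Approximates the "no CNI configuration file" check: the runtime network
// is considered ready only once at least one CNI config file exists in the
// watched directory.

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
		return
	}
	fmt.Println("network ready, configs:", confs)
}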
event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.849546 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.849575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.849604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.849623 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.952439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.952501 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.952513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.952531 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:31 crc kubenswrapper[4703]: I1209 12:05:31.952549 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:31Z","lastTransitionTime":"2025-12-09T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.008865 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9"] Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.009595 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.011385 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.013360 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.026673 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.040631 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.055090 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.055957 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.056006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.056018 4703 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.056041 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.056056 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.069076 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.087432 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.102089 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.112600 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.122317 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.127667 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.127702 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q96fc\" (UniqueName: \"kubernetes.io/projected/9dac2a02-8ee0-445c-bac8-4d448cda509d-kube-api-access-q96fc\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.127731 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.127809 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.134233 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.144823 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.158552 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.158614 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.158629 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.158651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.158663 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.161310 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.174139 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.199586 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new 
object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154ed
c32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.219651 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.228987 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.229053 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q96fc\" (UniqueName: \"kubernetes.io/projected/9dac2a02-8ee0-445c-bac8-4d448cda509d-kube-api-access-q96fc\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.229114 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.229179 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.229975 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 
12:05:32.230408 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.235575 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:32Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.238331 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9dac2a02-8ee0-445c-bac8-4d448cda509d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.250136 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q96fc\" (UniqueName: \"kubernetes.io/projected/9dac2a02-8ee0-445c-bac8-4d448cda509d-kube-api-access-q96fc\") pod \"ovnkube-control-plane-749d76644c-4sss9\" (UID: \"9dac2a02-8ee0-445c-bac8-4d448cda509d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.262160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.262220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.262232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.262251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.262265 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.295141 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/1.log" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.322660 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" Dec 09 12:05:32 crc kubenswrapper[4703]: W1209 12:05:32.336756 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dac2a02_8ee0_445c_bac8_4d448cda509d.slice/crio-7e6009bb39de6b8eb3cd492e3c940e0b327554e1c8ce0d434633406d3aa03542 WatchSource:0}: Error finding container 7e6009bb39de6b8eb3cd492e3c940e0b327554e1c8ce0d434633406d3aa03542: Status 404 returned error can't find the container with id 7e6009bb39de6b8eb3cd492e3c940e0b327554e1c8ce0d434633406d3aa03542 Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.365997 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.366035 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.366047 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.366063 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.366076 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.468333 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.468385 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.468398 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.468415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.468429 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.571420 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.571480 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.571494 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.571515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.571529 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.673601 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.673635 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.673646 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.673662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.673673 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.776981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.777033 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.777045 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.777063 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.777093 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.880111 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.880168 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.880180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.880220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.880234 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.982821 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.982857 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.982866 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.982880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:32 crc kubenswrapper[4703]: I1209 12:05:32.982914 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:32Z","lastTransitionTime":"2025-12-09T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.068794 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.068952 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.069508 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.069687 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.069512 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.069883 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.085368 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.085411 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.085428 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.085444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.085454 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.104893 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pf4r7"] Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.105286 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.105344 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.118349 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.130216 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.139582 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-856ln\" (UniqueName: \"kubernetes.io/projected/9f199898-7916-48b6-b5e6-c878bacae384-kube-api-access-856ln\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.139703 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.143510 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.153641 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.167106 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.182930 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.187939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.188027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc 
kubenswrapper[4703]: I1209 12:05:33.188043 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.188066 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.188079 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.197843 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.221218 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.237927 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.240575 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.240676 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-856ln\" (UniqueName: \"kubernetes.io/projected/9f199898-7916-48b6-b5e6-c878bacae384-kube-api-access-856ln\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.241011 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.241077 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:33.741058168 +0000 UTC m=+32.989821687 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.254161 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12
:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.259091 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-856ln\" (UniqueName: \"kubernetes.io/projected/9f199898-7916-48b6-b5e6-c878bacae384-kube-api-access-856ln\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.270338 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.286311 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.290040 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.290068 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.290077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.290091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.290102 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.300842 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.304171 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" event={"ID":"9dac2a02-8ee0-445c-bac8-4d448cda509d","Type":"ContainerStarted","Data":"c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.304241 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" event={"ID":"9dac2a02-8ee0-445c-bac8-4d448cda509d","Type":"ContainerStarted","Data":"7e6009bb39de6b8eb3cd492e3c940e0b327554e1c8ce0d434633406d3aa03542"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.313922 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.332467 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.347040 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.392412 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.392456 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.392474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.392497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.392509 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.494572 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.494606 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.494615 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.494627 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.494637 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.597074 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.597481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.597579 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.597663 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.597741 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.699662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.699697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.699708 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.699722 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.699733 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.745422 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.745534 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:33 crc kubenswrapper[4703]: E1209 12:05:33.745596 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:34.74557826 +0000 UTC m=+33.994341779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.801595 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.801663 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.801675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.801689 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.801697 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.904206 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.904251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.904260 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.904275 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:33 crc kubenswrapper[4703]: I1209 12:05:33.904283 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:33Z","lastTransitionTime":"2025-12-09T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.005973 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.006025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.006039 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.006058 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.006071 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.109971 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.110021 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.110034 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.110049 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.110057 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.212694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.212753 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.212764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.212780 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.212795 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.308677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" event={"ID":"9dac2a02-8ee0-445c-bac8-4d448cda509d","Type":"ContainerStarted","Data":"660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.314811 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.314846 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.314854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.314868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.314880 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.323326 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.333109 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.351667 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.363991 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.377593 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.387119 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.398496 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.413500 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.417381 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.417422 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.417439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.417486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.417499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.428673 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.446210 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.462132 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.477952 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.489460 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.500314 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.513010 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-r
esources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.519689 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.519735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.519746 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.519761 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.519774 4703 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.523952 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:34Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.622253 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.622516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.622628 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.622717 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.622792 4703 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.724822 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.724859 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.724869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.724885 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.724896 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.754535 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.754779 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.755005 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:36.754985304 +0000 UTC m=+36.003748823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.826952 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.826994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.827007 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.827023 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.827035 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.855791 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
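[Editor's note: the durationBeforeRetry values in the surrounding mount/unmount failures (2s above, then 16s, and 4s later in this log) are the kubelet's per-operation exponential backoff. A minimal illustrative Go sketch of that doubling pattern follows; the 2m cap is an assumption, and this is not the kubelet's actual implementation.]

    package main

    import (
    	"fmt"
    	"time"
    )

    // Illustrative only: reproduces the doubling retry delays (2s, 4s, 8s, ...)
    // reported as durationBeforeRetry in the kubelet records around this note.
    // The cap below is an assumed value, not one taken from these logs.
    func main() {
    	delay := 2 * time.Second
    	const maxDelay = 2 * time.Minute
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }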
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.856068 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:05:50.856043245 +0000 UTC m=+50.104806764 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.929709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.929748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.929758 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.929774 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.929786 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:34Z","lastTransitionTime":"2025-12-09T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.956874 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.957310 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.957495 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:34 crc kubenswrapper[4703]: I1209 12:05:34.957636 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957276 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957879 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957955 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.958102 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:50.958084966 +0000 UTC m=+50.206848485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957447 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.958657 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.958735 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.958828 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:50.958817559 +0000 UTC m=+50.207581078 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957602 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.957760 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.959213 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:50.959202761 +0000 UTC m=+50.207966280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:05:34 crc kubenswrapper[4703]: E1209 12:05:34.959334 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:50.959323864 +0000 UTC m=+50.208087383 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.031819 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.031860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.031871 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.031888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
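[Editor's note: every NodeNotReady condition in this log carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The small Go sketch below performs the check that message implies, listing that directory (path taken verbatim from the log) for network configs; the .conf/.conflist/.json extension filter is an assumption about what the CNI runtime accepts.]

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	found := 0
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // assumed CNI config extensions
    			fmt.Println("CNI config present:", e.Name())
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no CNI configuration file found; has the network provider started?")
    	}
    }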
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.031898 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.069097 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.069153 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.069171 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.069261 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:35 crc kubenswrapper[4703]: E1209 12:05:35.069371 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:05:35 crc kubenswrapper[4703]: E1209 12:05:35.069452 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:05:35 crc kubenswrapper[4703]: E1209 12:05:35.069491 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384"
Dec 09 12:05:35 crc kubenswrapper[4703]: E1209 12:05:35.069526 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.134423 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.134460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.134470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.134487 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.134534 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.237593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.237627 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.237636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.237651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.237674 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.339517 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.340134 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.340319 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.340418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.340508 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.442810 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.442851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.442862 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.442877 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.442890 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.544793 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.544826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.544836 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.544853 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.544873 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.647568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.647625 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.647636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.647653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.647670 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.749695 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.749751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.749760 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.749777 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.749787 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.851798 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.851832 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.851841 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.851856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.851864 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.954859 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.955132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.955209 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.955311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:35 crc kubenswrapper[4703]: I1209 12:05:35.955439 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:35Z","lastTransitionTime":"2025-12-09T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.058034 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.058073 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.058085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.058103 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.058115 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.159641 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.159685 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.159697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.159716 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.159729 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.262648 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.262972 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.263050 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.263127 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.263217 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.365176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.365247 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.365259 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.365275 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.365286 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.467246 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.467285 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.467293 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.467306 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.467315 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.569841 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.569901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.569934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.569950 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.569959 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.672300 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.672587 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.672675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.672783 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.672878 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.774327 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:36 crc kubenswrapper[4703]: E1209 12:05:36.774443 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:36 crc kubenswrapper[4703]: E1209 12:05:36.774503 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:40.774488604 +0000 UTC m=+40.023252113 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.776240 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.776265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.776273 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.776285 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:36 crc kubenswrapper[4703]: I1209 12:05:36.776293 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:36Z","lastTransitionTime":"2025-12-09T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five entries above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, and the setters.go:603 "Node became not ready" condition) repeat with identical content at roughly 100 ms intervals from 12:05:36.878 through 12:05:38.519; only the timestamps advance. The 12:05:37.069 entries below were interleaved among those repeats.]
Dec 09 12:05:37 crc kubenswrapper[4703]: I1209 12:05:37.069216 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:37 crc kubenswrapper[4703]: I1209 12:05:37.069221 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:05:37 crc kubenswrapper[4703]: E1209 12:05:37.069657 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:05:37 crc kubenswrapper[4703]: I1209 12:05:37.069245 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:37 crc kubenswrapper[4703]: I1209 12:05:37.069245 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:05:37 crc kubenswrapper[4703]: E1209 12:05:37.069719 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:05:37 crc kubenswrapper[4703]: E1209 12:05:37.069578 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:05:37 crc kubenswrapper[4703]: E1209 12:05:37.069811 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384"
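[The ~100 ms cadence of those NotReady repeats can be read directly off the klog timestamps. Below is a minimal sketch of one way to measure it from a saved copy of this journal; the file name is hypothetical, and the regex matches the setters.go:603 entries exactly as captured above:]

import re
from datetime import datetime

# Matches the klog timestamp of each repeated setters.go:603 entry, e.g.
# I1209 12:05:36.776293 4703 setters.go:603] "Node became not ready"
PAT = re.compile(r'I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}).*?"Node became not ready"')

def heartbeat_times(path):
    """Collect the timestamp of every 'Node became not ready' report in the file."""
    times = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PAT.search(line)
            if m:
                times.append(datetime.strptime(m.group(1), "%H:%M:%S.%f"))
    return times

if __name__ == "__main__":
    ts = heartbeat_times("kubelet-journal.log")  # hypothetical capture of this log
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    if gaps:
        print(f"{len(ts)} NotReady reports, mean gap {sum(gaps) / len(gaps):.3f} s")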
Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.537701 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.552736 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.564415 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.575523 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.584104 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.596261 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.613064 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.621475 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.621511 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:38 crc 
kubenswrapper[4703]: I1209 12:05:38.621519 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.621535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.621545 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:38Z","lastTransitionTime":"2025-12-09T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.627681 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.641754 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.651804 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.663556 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.681593 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.693246 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.705120 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.713901 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.724020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.724062 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.724071 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.724085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.724093 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:38Z","lastTransitionTime":"2025-12-09T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.733858 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc
6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.744580 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.826230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.826267 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.826278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.826294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.826304 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:38Z","lastTransitionTime":"2025-12-09T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.928853 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.928894 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.928903 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.928916 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:38 crc kubenswrapper[4703]: I1209 12:05:38.928928 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:38Z","lastTransitionTime":"2025-12-09T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.032479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.032529 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.032540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.032558 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.032571 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.069328 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:39 crc kubenswrapper[4703]: E1209 12:05:39.069470 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.069874 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:39 crc kubenswrapper[4703]: E1209 12:05:39.069929 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.069975 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:39 crc kubenswrapper[4703]: E1209 12:05:39.070021 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.070072 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:39 crc kubenswrapper[4703]: E1209 12:05:39.070125 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.135919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.135996 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.136009 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.136029 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.136045 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.239979 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.240054 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.240070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.240093 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.240120 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.343514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.343570 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.343583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.343602 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.343615 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.446845 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.446890 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.446898 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.446912 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.446923 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.550101 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.550169 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.550181 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.550217 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:39 crc kubenswrapper[4703]: I1209 12:05:39.550230 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:39Z","lastTransitionTime":"2025-12-09T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready" blocks, logged roughly every 100 ms from 12:05:39.652 through 12:05:40.995 with only the timestamps changing, elided]
Dec 09 12:05:40 crc kubenswrapper[4703]: I1209 12:05:40.818132 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:05:40 crc kubenswrapper[4703]: E1209 12:05:40.818410 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:05:40 crc kubenswrapper[4703]: E1209 12:05:40.818523 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:05:48.81850093 +0000 UTC m=+48.067264449 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.068890 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.069058 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.069438 4703 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.069462 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.069545 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.069622 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.069717 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.069857 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.087738 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.098354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.098407 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.098415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.098429 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.098438 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.101004 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.113006 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.125687 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.136722 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.146994 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.158655 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.172635 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.185524 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.195737 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.200647 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.200686 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.200700 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.200715 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.200726 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.214640 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc
6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://305779d1504b21f7379fef82e624a34cad28dfb1fc1a224705497a52fc489093\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"message\\\":\\\" 5959 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.724720 5959 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:05:28.726249 5959 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1209 12:05:28.726274 5959 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 12:05:28.726304 5959 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:05:28.726310 5959 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:05:28.726312 5959 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 12:05:28.726327 5959 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:05:28.726336 5959 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:05:28.726368 5959 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:05:28.726395 5959 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:05:28.726410 5959 factory.go:656] Stopping watch factory\\\\nI1209 12:05:28.726427 5959 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:05:28.726435 5959 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:05:28.726438 5959 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.226997 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.240502 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.254116 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.267723 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.278380 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.303642 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc 
kubenswrapper[4703]: I1209 12:05:41.303679 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.303690 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.303705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.303719 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.405718 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.405988 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.406057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.406129 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.406211 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.509046 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.509090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.509105 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.509120 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.509130 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.610908 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.610946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.610957 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.610972 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.610981 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.713474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.713514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.713524 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.713539 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.713548 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.816352 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.816392 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.816402 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.816415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.816427 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.919103 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.919148 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.919156 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.919171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.919179 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.926158 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.926357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.926431 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.926525 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.926721 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.938013 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.941414 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.941624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.941702 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.941789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.941868 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.952047 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.955424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.955469 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.955479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.955493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.955503 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.965806 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.968675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.968785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.968939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.969088 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.969246 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.980297 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.983438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.983467 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
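The payload in these failed patches is worth unpacking once, since it recurs unchanged on every retry: it is a strategic-merge patch against the Node's .status, and the $setElementOrder/conditions directive pins the order in which the four conditions are merged. Note the polarity of the conditions: for MemoryPressure, DiskPressure and PIDPressure a status of "False" is the healthy state, while for Ready it is the failure state, which is why the same patch produces three "sufficient" events and one NodeNotReady. A minimal, self-contained Go sketch (the condition values are trimmed from the payload above; the condition struct here is ad hoc, not the upstream corev1.NodeCondition type) that decodes the array and applies that polarity:

// conditions.go - decode the "conditions" array from the kubelet's
// status patch (as seen in the log) and summarize node health.
package main

import (
	"encoding/json"
	"fmt"
)

// ad hoc struct for illustration, not corev1.NodeCondition
type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

func main() {
	// Trimmed from the payload in the log: three pressure conditions
	// are False (healthy); Ready is False (unhealthy).
	raw := `[
	 {"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},
	 {"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},
	 {"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},
	 {"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready"}
	]`

	var conds []condition
	if err := json.Unmarshal([]byte(raw), &conds); err != nil {
		panic(err)
	}

	for _, c := range conds {
		// Ready is healthy when True; the pressure conditions are
		// healthy when False. This polarity explains why four
		// status:"False" entries still leave the node NotReady.
		healthy := (c.Type == "Ready") == (c.Status == "True")
		fmt.Printf("%-14s status=%-5s healthy=%-5v reason=%s\n", c.Type, c.Status, healthy, c.Reason)
	}
}

Run against the full conditions array from the log, this reports the three pressure conditions healthy and Ready unhealthy, exactly the state the kubelet keeps trying to publish.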
event="NodeHasNoDiskPressure" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.983475 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.983490 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:41 crc kubenswrapper[4703]: I1209 12:05:41.983499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:41Z","lastTransitionTime":"2025-12-09T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.993918 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:41 crc kubenswrapper[4703]: E1209 12:05:41.994045 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.021374 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
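Every retry above dies in the same place: not in the API server's merge logic but in TLS verification when the API server calls the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, months before the current wall clock of 2025-12-09T12:05:41Z. A minimal diagnostic sketch in Go (only the address is taken from the log; InsecureSkipVerify is deliberate here so the expired certificate can be fetched and inspected instead of being rejected during the handshake):

// certcheck.go - dial the webhook endpoint named in the log and report
// the serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the webhook error in the log

	// Skip verification on purpose: we want to look at the expired
	// certificate, not fail the handshake the way the caller does.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n",
			cert.Subject, cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		if now.After(cert.NotAfter) {
			// Mirrors the error in the log: "x509: certificate has
			// expired or is not yet valid: current time ... is after ..."
			fmt.Printf("  EXPIRED: current time %s is after %s\n",
				now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		}
	}
}

On CRC, a certificate this far past its notAfter usually means the bundled cluster sat stopped beyond its rotation window; the usual remedy is letting the cluster's start sequence renew its internal certificates rather than patching the webhook by hand.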
event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.021419 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.021428 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.021442 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.021454 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.123591 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.123886 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.124053 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.124287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.124446 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.227612 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.227889 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.227901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.227917 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.227932 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.331246 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.331286 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.331296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.331310 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.331318 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.433822 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.433850 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.433857 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.433871 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.433880 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.537135 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.537808 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.537930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.538018 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.538085 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.640670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.640908 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.641001 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.641111 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.641285 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.744219 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.744261 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.744271 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.744289 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.744308 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.846312 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.846590 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.846711 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.846796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.846898 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.948892 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.948920 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.948929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.948944 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:42 crc kubenswrapper[4703]: I1209 12:05:42.948952 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:42Z","lastTransitionTime":"2025-12-09T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.051164 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.051420 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.051480 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.051563 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.051646 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.068644 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.068653 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.068917 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:43 crc kubenswrapper[4703]: E1209 12:05:43.069040 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.069139 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:43 crc kubenswrapper[4703]: E1209 12:05:43.069217 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:43 crc kubenswrapper[4703]: E1209 12:05:43.069138 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:43 crc kubenswrapper[4703]: E1209 12:05:43.069426 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.153653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.153697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.153709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.153726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.153739 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.256491 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.256528 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.256540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.256636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.256652 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.358417 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.358444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.358452 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.358464 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.358473 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.461580 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.462048 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.462150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.462296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.462396 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.565333 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.565438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.565460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.565489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.565508 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.668481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.668542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.668553 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.668575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.668601 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.771582 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.771655 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.771671 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.771701 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.771720 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.874360 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.874431 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.874440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.874454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.874463 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.977955 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.977996 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.978004 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.978020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:43 crc kubenswrapper[4703]: I1209 12:05:43.978037 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:43Z","lastTransitionTime":"2025-12-09T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.080766 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.080803 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.080811 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.080829 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.080838 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.183154 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.183218 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.183229 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.183249 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.183260 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.285719 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.286014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.286091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.286218 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.286294 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.389387 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.389977 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.390297 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.390496 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.390652 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.495050 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.495595 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.495754 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.495937 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.496108 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.601013 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.601066 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.601077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.601100 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.601113 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.704563 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.704609 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.704624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.704638 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.704646 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.814737 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.814826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.814862 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.814911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.814933 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.918661 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.918723 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.918735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.918756 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:44 crc kubenswrapper[4703]: I1209 12:05:44.918769 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:44Z","lastTransitionTime":"2025-12-09T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.021404 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.021460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.021473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.021489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.021501 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.069173 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.069242 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:45 crc kubenswrapper[4703]: E1209 12:05:45.069329 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.069173 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:45 crc kubenswrapper[4703]: E1209 12:05:45.069401 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:45 crc kubenswrapper[4703]: E1209 12:05:45.069466 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.069531 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:45 crc kubenswrapper[4703]: E1209 12:05:45.069610 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.124130 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.124175 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.124207 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.124221 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.124231 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.227138 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.227216 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.227226 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.227245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.227255 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.330078 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.330132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.330145 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.330385 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.330412 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.434372 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.434444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.434462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.434492 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.434516 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.537621 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.537685 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.537702 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.537726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.537745 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.641952 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.642025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.642055 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.642091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.642115 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.745359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.745416 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.745429 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.745454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.745469 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.849422 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.849512 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.849537 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.849569 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.849588 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.953381 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.953453 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.953472 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.953506 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:45 crc kubenswrapper[4703]: I1209 12:05:45.953536 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:45Z","lastTransitionTime":"2025-12-09T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.056639 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.056703 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.056713 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.056736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.056749 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.070037 4703 scope.go:117] "RemoveContainer" containerID="fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.087445 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":
\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.103406 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98
7117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.118151 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.130927 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.143125 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.159060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.159098 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.159106 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.159122 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.159131 4703 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.164374 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 
6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.173680 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.184533 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.196710 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.210828 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.222254 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.232707 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.241921 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.253479 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.265098 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.265172 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.265213 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.265232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.265241 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.271253 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.314434 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.348917 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/1.log" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.351289 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.351491 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.366615 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.367420 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.367458 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.367470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.367488 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.367501 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.384689 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.397723 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.411925 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.424694 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.448024 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.466022 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.469869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.469905 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.469915 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.469947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.469957 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.488634 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.501686 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.518621 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 
base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.528507 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.540292 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.552575 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.564670 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.572222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.572249 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.572258 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.572272 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.572282 4703 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.580258 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.592463 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.675244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.675291 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.675300 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.675314 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.675324 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.777533 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.777575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.777583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.777598 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.777607 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.879560 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.879601 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.879614 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.879629 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.879640 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.982090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.982124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.982132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.982146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:46 crc kubenswrapper[4703]: I1209 12:05:46.982154 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:46Z","lastTransitionTime":"2025-12-09T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.069257 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.069326 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:47 crc kubenswrapper[4703]: E1209 12:05:47.069392 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.069349 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.069348 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:47 crc kubenswrapper[4703]: E1209 12:05:47.069473 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:47 crc kubenswrapper[4703]: E1209 12:05:47.069554 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:47 crc kubenswrapper[4703]: E1209 12:05:47.069639 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.084842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.084900 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.084915 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.084937 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.084949 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.187368 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.187396 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.187403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.187418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.187427 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.290076 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.290394 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.290420 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.290438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.290451 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.355453 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/2.log" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.356466 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/1.log" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.358887 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" exitCode=1 Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.358917 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.358945 4703 scope.go:117] "RemoveContainer" containerID="fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.359649 4703 scope.go:117] "RemoveContainer" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" Dec 09 12:05:47 crc kubenswrapper[4703]: E1209 12:05:47.359799 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.375072 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.390451 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.391947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.392036 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.392123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.392204 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.392342 4703 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.403117 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.413784 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.423822 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.433032 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.446093 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.458681 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.466968 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.483881 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fee843e54b1ecda5724a42e485d25a1ddf459ebc6a54c2722cf8cd8d11684ca5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"message\\\":\\\"rvices.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:30.205709 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-ncbbx\\\\nF1209 12:05:30.205711 6137 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:30Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:05:30.205711 6137 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1209 12:05:30.205722 6137 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-ncbbx in node crc\\\\nI1209 12:05:30.205728 6137 base_network_controller_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib
\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.493048 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.499873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.499901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.499909 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.499921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.499930 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.507439 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.517046 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.525716 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.536898 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.547678 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.602084 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.602116 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.602123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.602136 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.602144 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.705235 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.705268 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.705283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.705302 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.705320 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.807527 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.807585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.807596 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.807617 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.807629 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.909962 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.910265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.910389 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.910517 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:47 crc kubenswrapper[4703]: I1209 12:05:47.910635 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:47Z","lastTransitionTime":"2025-12-09T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.013558 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.013921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.014026 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.014139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.014266 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.116160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.116236 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.116247 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.116262 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.116272 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.218478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.218559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.218575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.218593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.218630 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.320815 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.320845 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.320854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.320865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.320874 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.363352 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/2.log" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.423674 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.423742 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.423754 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.423772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.423787 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.530577 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.530603 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.530611 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.531005 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.531046 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.569404 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.570130 4703 scope.go:117] "RemoveContainer" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" Dec 09 12:05:48 crc kubenswrapper[4703]: E1209 12:05:48.570364 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.583483 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.594330 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.605252 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.616580 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.629277 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.633484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.633527 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc 
kubenswrapper[4703]: I1209 12:05:48.633538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.633560 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.633576 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.643339 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.656528 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.675887 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.690333 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.700505 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.713312 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.726443 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.736139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.736177 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.736203 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.736220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.736233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.740556 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.751389 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.773295 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.784652 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.839070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.839123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.839139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.839159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.839169 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.912737 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:48 crc kubenswrapper[4703]: E1209 12:05:48.912868 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:48 crc kubenswrapper[4703]: E1209 12:05:48.912934 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:04.912915614 +0000 UTC m=+64.161679133 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.941440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.941491 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.941504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.941518 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:48 crc kubenswrapper[4703]: I1209 12:05:48.941526 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:48Z","lastTransitionTime":"2025-12-09T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.044092 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.044136 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.044146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.044161 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.044173 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.068575 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.068661 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:49 crc kubenswrapper[4703]: E1209 12:05:49.068718 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.068738 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:49 crc kubenswrapper[4703]: E1209 12:05:49.068787 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.068806 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:49 crc kubenswrapper[4703]: E1209 12:05:49.068829 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:49 crc kubenswrapper[4703]: E1209 12:05:49.069137 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.147304 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.147377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.147388 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.147410 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.147448 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.249571 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.249610 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.249618 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.249633 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.249687 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.352021 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.352073 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.352082 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.352095 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.352105 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.454713 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.454753 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.454761 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.454775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.454784 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.557691 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.557743 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.557751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.557769 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.557778 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.659894 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.659940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.659951 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.659966 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.659978 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.762143 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.762180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.762208 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.762224 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.762237 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.864365 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.864408 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.864421 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.864439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.864449 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.966774 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.966816 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.966825 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.966879 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:49 crc kubenswrapper[4703]: I1209 12:05:49.966892 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:49Z","lastTransitionTime":"2025-12-09T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.069008 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.069055 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.069064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.069077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.069086 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.171780 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.171834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.171843 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.171862 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.171880 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.273837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.273876 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.273884 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.273898 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.273906 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.375726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.375769 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.375780 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.375797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.375807 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.478376 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.478415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.478426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.478441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.478454 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.581220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.581295 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.581303 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.581318 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.581327 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.683497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.683525 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.683535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.683548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.683557 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.785700 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.785729 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.785743 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.785756 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.785765 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.887890 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.887936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.887945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.887959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.887969 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.936311 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:05:50 crc kubenswrapper[4703]: E1209 12:05:50.936412 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:06:22.936393626 +0000 UTC m=+82.185157145 (durationBeforeRetry 32s). 
Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.990498 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.990540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.990548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.990562 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:50 crc kubenswrapper[4703]: I1209 12:05:50.990571 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:50Z","lastTransitionTime":"2025-12-09T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.037966 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.038026 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.038051 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.038071 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038210 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038226 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038231 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038270 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038279 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038274 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038287 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038236 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038312 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:23.038293903 +0000 UTC m=+82.287057422 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038698 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:23.038667324 +0000 UTC m=+82.287430843 (durationBeforeRetry 32s). 
Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038727 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:23.038718976 +0000 UTC m=+82.287482495 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.038742 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:23.038735696 +0000 UTC m=+82.287499315 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.069514 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.069534 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.069549 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.069912 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.069734 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.069942 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.069570 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:51 crc kubenswrapper[4703]: E1209 12:05:51.070015 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.086455 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.092719 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.092751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.092759 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.092772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.092780 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.100079 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.111576 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.121264 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z"
Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.132490 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z"
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.143964 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.158830 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.168076 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.179851 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.189454 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.195001 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.195052 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.195060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.195074 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.195111 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.199260 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.208401 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.225769 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.235730 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.248310 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.258931 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.297418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.297470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.297490 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.297507 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.297517 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.400705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.400748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.400758 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.400775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.400789 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.504165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.504313 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.504330 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.504358 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.504376 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.608510 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.608914 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.608926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.608945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.608958 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.710856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.710901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.710909 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.710924 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.710936 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.813926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.813959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.813969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.813985 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.813995 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.917231 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.917269 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.917278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.917290 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:51 crc kubenswrapper[4703]: I1209 12:05:51.917300 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:51Z","lastTransitionTime":"2025-12-09T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.019621 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.019670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.019679 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.019693 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.019702 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.091688 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.091739 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.091754 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.091770 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.091782 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.104785 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.108646 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.108685 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.108700 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.108716 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.108727 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.121942 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.123994 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.130740 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.130803 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.130817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.130839 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.130854 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.138745 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.139501 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.142512 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.145794 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.145824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.145834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.145850 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.145861 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.152947 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.161296 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.164899 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.164930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.164960 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.164977 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.164989 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.168022 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f
94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.177076 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: E1209 12:05:52.177205 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.178617 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.178653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.178662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.178676 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.178685 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.179995 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.191120 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.201658 4703 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.213455 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.224480 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.234843 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.252700 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.264143 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.276028 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.280646 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.280673 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.280681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.280695 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.280705 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.287679 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.297568 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.309071 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.320554 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:05:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.382370 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.382439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.382448 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.382461 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.382470 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.484969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.485005 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.485014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.485026 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.485035 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.587478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.587530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.587542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.587560 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.587573 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.689954 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.689998 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.690009 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.690025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.690037 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.792353 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.792422 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.792434 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.792459 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.792486 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.895295 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.895347 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.895356 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.895369 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.895378 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.997932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.997976 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.998008 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.998038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:52 crc kubenswrapper[4703]: I1209 12:05:52.998050 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:52Z","lastTransitionTime":"2025-12-09T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.068918 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.068985 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.068992 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.069072 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:53 crc kubenswrapper[4703]: E1209 12:05:53.069075 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:53 crc kubenswrapper[4703]: E1209 12:05:53.069182 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:53 crc kubenswrapper[4703]: E1209 12:05:53.069275 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:53 crc kubenswrapper[4703]: E1209 12:05:53.069327 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.101230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.101261 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.101270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.101284 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.101294 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.204653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.204693 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.204703 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.204722 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.204735 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.306788 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.306821 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.306831 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.306846 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.306859 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.409216 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.409253 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.409269 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.409285 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.409297 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.511453 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.511491 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.511520 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.511532 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.511543 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.613555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.613587 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.613595 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.613634 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.613644 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.715781 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.715816 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.715826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.715842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.715853 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.817825 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.817868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.817881 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.817898 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.817909 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.920140 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.920208 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.920221 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.920239 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:53 crc kubenswrapper[4703]: I1209 12:05:53.920251 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:53Z","lastTransitionTime":"2025-12-09T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.022657 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.022699 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.022711 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.022727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.022736 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.127530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.127569 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.127579 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.127593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.127604 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.229565 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.229618 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.229629 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.229645 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.229653 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.331764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.331827 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.331840 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.331854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.331864 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.434013 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.434051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.434062 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.434077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.434090 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.536682 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.536717 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.536727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.536742 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.536753 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.639354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.640101 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.640152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.640169 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.640179 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.743124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.743161 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.743171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.743185 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.743214 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.845134 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.845171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.845180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.845230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.845241 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.947393 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.947454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.947464 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.947477 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:54 crc kubenswrapper[4703]: I1209 12:05:54.947486 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:54Z","lastTransitionTime":"2025-12-09T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.049810 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.049852 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.049864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.049880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.049927 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.069350 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:55 crc kubenswrapper[4703]: E1209 12:05:55.069472 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.069618 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:55 crc kubenswrapper[4703]: E1209 12:05:55.069666 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.069762 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:55 crc kubenswrapper[4703]: E1209 12:05:55.069825 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.070009 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:55 crc kubenswrapper[4703]: E1209 12:05:55.070064 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.152363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.152412 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.152424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.152447 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.152460 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.254471 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.254509 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.254521 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.254538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.254550 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.357406 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.357446 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.357455 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.357470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.357480 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.460287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.460348 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.460360 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.460380 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.460394 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.562807 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.562856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.562869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.562884 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.562893 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.665303 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.665365 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.665375 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.665389 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.665400 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.767574 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.767630 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.767639 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.767653 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.767662 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.869413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.869459 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.869468 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.869482 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.869490 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.971950 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.972051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.972060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.972074 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:55 crc kubenswrapper[4703]: I1209 12:05:55.972083 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:55Z","lastTransitionTime":"2025-12-09T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.073787 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.073823 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.073834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.073851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.073862 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.176015 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.176081 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.176090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.176103 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.176110 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.278625 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.278662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.278671 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.278686 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.278696 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.380476 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.380518 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.380527 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.380542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.380551 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.482948 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.482994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.483005 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.483018 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.483026 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.585004 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.585045 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.585056 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.585072 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.585083 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.687126 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.687165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.687175 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.687207 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.687220 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.789332 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.789373 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.789386 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.789403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.789413 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.891445 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.891478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.891486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.891500 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.891509 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.993728 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.993773 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.993785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.993806 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:56 crc kubenswrapper[4703]: I1209 12:05:56.993820 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:56Z","lastTransitionTime":"2025-12-09T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.068969 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.069041 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.068971 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:57 crc kubenswrapper[4703]: E1209 12:05:57.069087 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:57 crc kubenswrapper[4703]: E1209 12:05:57.069223 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.069256 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:57 crc kubenswrapper[4703]: E1209 12:05:57.069279 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:57 crc kubenswrapper[4703]: E1209 12:05:57.069323 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.095538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.095565 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.095573 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.095585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.095594 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.198019 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.198092 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.198113 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.198138 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.198155 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.300374 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.300417 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.300426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.300446 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.300455 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.403460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.403501 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.403516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.403532 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.403541 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.505801 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.505843 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.505860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.505877 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.505888 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.608243 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.608277 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.608286 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.608299 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.608308 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.710342 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.710392 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.710403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.710422 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.710432 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.813326 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.813372 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.813383 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.813398 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.813408 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.915219 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.915257 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.915268 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.915283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:57 crc kubenswrapper[4703]: I1209 12:05:57.915294 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:57Z","lastTransitionTime":"2025-12-09T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.017641 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.017684 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.017694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.017711 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.017724 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.119633 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.119677 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.119687 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.119699 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.119707 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.222165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.222214 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.222226 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.222241 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.222251 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.324545 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.324588 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.324599 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.324615 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.324628 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.426569 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.426861 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.426968 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.427125 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.427254 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.529460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.529504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.529514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.529530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.529540 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.632283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.632322 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.632330 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.632345 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.632356 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.734839 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.734911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.734925 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.734943 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.734958 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.837111 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.837149 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.837159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.837174 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.837183 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.938589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.938650 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.938661 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.938681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:58 crc kubenswrapper[4703]: I1209 12:05:58.938693 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:58Z","lastTransitionTime":"2025-12-09T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.040631 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.040674 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.040683 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.040697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.040707 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.069164 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.069180 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:05:59 crc kubenswrapper[4703]: E1209 12:05:59.069304 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.069320 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.069345 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:05:59 crc kubenswrapper[4703]: E1209 12:05:59.069419 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:05:59 crc kubenswrapper[4703]: E1209 12:05:59.069486 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:05:59 crc kubenswrapper[4703]: E1209 12:05:59.069600 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.142265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.142323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.142335 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.142353 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.142364 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.245023 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.246940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.246970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.247191 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.247208 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.349670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.349709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.349718 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.349731 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.349741 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.451415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.451446 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.451453 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.451467 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.451476 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.554418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.554470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.554481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.554496 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.554506 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.657052 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.657087 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.657110 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.657124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.657133 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.759712 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.759766 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.759777 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.759795 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.759807 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.863817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.863908 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.863946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.863964 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.863978 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.966970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.967012 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.967022 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.967039 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:05:59 crc kubenswrapper[4703]: I1209 12:05:59.967050 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:05:59Z","lastTransitionTime":"2025-12-09T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.069102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.069144 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.069152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.069167 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.069176 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.171606 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.171654 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.171666 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.171683 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.171695 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.274256 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.274371 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.274395 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.274424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.274446 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.376919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.376956 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.376967 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.376980 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.376988 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.479289 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.479331 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.479339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.479353 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.479362 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.581110 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.581409 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.581534 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.581665 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.581765 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.684263 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.684303 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.684312 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.684326 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.684335 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.786359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.786614 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.786751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.786855 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.786954 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.890033 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.890364 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.890509 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.890613 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.890702 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.992964 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.993001 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.993010 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.993025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:00 crc kubenswrapper[4703]: I1209 12:06:00.993035 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:00Z","lastTransitionTime":"2025-12-09T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.068787 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.068867 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.068927 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:06:01 crc kubenswrapper[4703]: E1209 12:06:01.068924 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:06:01 crc kubenswrapper[4703]: E1209 12:06:01.069013 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:06:01 crc kubenswrapper[4703]: E1209 12:06:01.069070 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.069742 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:06:01 crc kubenswrapper[4703]: E1209 12:06:01.069856 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384"
Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.084955 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096215 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096216 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096402 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096430 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.096446 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.108178 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.117728 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.126906 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.136842 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.148618 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.160034 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.171942 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.181351 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.197562 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.198157 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.198182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.198206 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.198222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.198233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.206868 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.216799 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.229098 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.238050 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.249354 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.261080 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.300057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.300106 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.300118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.300132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.300141 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.401889 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.401934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.401945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.401959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.401970 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.504022 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.504061 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.504070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.504085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.504094 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.606473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.606515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.606526 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.606543 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.606555 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.708940 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.708981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.708991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.709014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.709030 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.811086 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.811123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.811131 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.811146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.811160 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.914432 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.914499 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.914510 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.914527 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:01 crc kubenswrapper[4703]: I1209 12:06:01.914537 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:01Z","lastTransitionTime":"2025-12-09T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.016405 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.016431 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.016439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.016451 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.016459 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.118910 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.118942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.118949 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.118963 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.118978 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.221453 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.221503 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.221514 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.221536 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.221550 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.323597 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.323642 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.323654 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.323669 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.323681 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.426080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.426218 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.426229 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.426245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.426256 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.528873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.529182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.529218 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.529233 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.529244 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.576671 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.576763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.576774 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.576790 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.576800 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.588972 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.591977 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.592020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.592029 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.592044 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.592054 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.602138 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.604776 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.604804 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
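Note: every status-patch retry above fails identically: the PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743/node, whose serving certificate expired on 2025-08-24T17:21:41Z, long before the current time 2025-12-09T12:06:02Z. A small Go diagnostic sketch, assuming the endpoint from the log is reachable from wherever it runs, that prints the presented certificate's validity window without trusting the chain:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Webhook address taken from the Post URL in the error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the cert only; do not validate the chain
	})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired, matching the x509 error in the log")
	}
}

Against this node the NotAfter printed should be 2025-08-24T17:21:41Z, confirming the failure is clock/certificate rotation rather than the patch contents.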
event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.604817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.604839 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.604851 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.615936 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.618763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.618802 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.618814 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.618830 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.618841 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.629512 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.632834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.632870 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.632886 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.632905 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.632916 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.643330 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:02 crc kubenswrapper[4703]: E1209 12:06:02.643441 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.644768 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.644821 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.644845 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.644869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.644885 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.747118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.747154 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.747165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.747180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.747207 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.849950 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.849991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.850002 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.850017 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.850028 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.952886 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.952939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.952949 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.952964 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:02 crc kubenswrapper[4703]: I1209 12:06:02.952975 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:02Z","lastTransitionTime":"2025-12-09T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.055102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.055149 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.055162 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.055183 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.055220 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.068796 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.068830 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.068835 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.068840 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:03 crc kubenswrapper[4703]: E1209 12:06:03.068924 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:03 crc kubenswrapper[4703]: E1209 12:06:03.069356 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:03 crc kubenswrapper[4703]: E1209 12:06:03.069431 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.069496 4703 scope.go:117] "RemoveContainer" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" Dec 09 12:06:03 crc kubenswrapper[4703]: E1209 12:06:03.069498 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:03 crc kubenswrapper[4703]: E1209 12:06:03.069639 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.159011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.159065 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.159146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.159174 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.159233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.261231 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.261272 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.261283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.261299 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.261311 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.363301 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.363336 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.363344 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.363359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.363368 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.465772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.465809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.465820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.465834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.465845 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.568157 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.568220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.568234 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.568247 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.568257 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.670304 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.670354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.670366 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.670382 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.670394 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.772161 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.772213 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.772222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.772236 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.772245 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.874970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.875014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.875026 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.875046 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.875060 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.977433 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.977477 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.977493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.977507 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:03 crc kubenswrapper[4703]: I1209 12:06:03.977516 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:03Z","lastTransitionTime":"2025-12-09T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.079915 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.079959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.079971 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.079986 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.079999 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.182068 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.182119 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.182132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.182149 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.182490 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.285143 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.285243 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.285258 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.285278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.285290 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.387639 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.387688 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.387708 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.387730 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.387742 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.490289 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.490354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.490371 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.490390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.490403 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.593059 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.593109 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.593121 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.593139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.593151 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.695573 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.695617 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.695628 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.695644 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.695653 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.798346 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.798375 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.798384 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.798396 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.798406 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.900119 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.900166 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.900176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.900207 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.900220 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:04Z","lastTransitionTime":"2025-12-09T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:04 crc kubenswrapper[4703]: I1209 12:06:04.976521 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:04 crc kubenswrapper[4703]: E1209 12:06:04.976676 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:06:04 crc kubenswrapper[4703]: E1209 12:06:04.976762 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:36.976744166 +0000 UTC m=+96.225507685 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.003060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.003115 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.003131 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.003154 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.003171 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.068689 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.068739 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.068770 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.068748 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:05 crc kubenswrapper[4703]: E1209 12:06:05.068868 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:05 crc kubenswrapper[4703]: E1209 12:06:05.068899 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:05 crc kubenswrapper[4703]: E1209 12:06:05.068959 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:05 crc kubenswrapper[4703]: E1209 12:06:05.069054 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.105774 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.105825 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.105833 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.105848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.105857 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.211864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.211892 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.211902 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.211916 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.211928 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.313970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.314022 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.314035 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.314052 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.314066 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.416579 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.416628 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.416640 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.416658 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.416671 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.519395 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.519452 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.519462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.519478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.519488 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.621872 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.621919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.621930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.621945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.621957 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.724264 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.724298 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.724309 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.724323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.724334 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.826586 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.826624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.826633 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.826647 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.826657 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.929651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.929684 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.929694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.929709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:05 crc kubenswrapper[4703]: I1209 12:06:05.929719 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:05Z","lastTransitionTime":"2025-12-09T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.031747 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.031781 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.031794 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.031809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.031819 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.133772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.133815 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.133828 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.133842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.133851 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.235710 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.235750 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.235759 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.235775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.235785 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.337441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.337470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.337479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.337491 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.337499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.440540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.440599 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.440609 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.440626 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.440638 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.543156 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.543240 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.543248 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.543261 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.543279 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.645078 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.645125 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.645133 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.645148 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.645159 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.747012 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.747079 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.747091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.747113 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.747152 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.851244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.851299 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.851313 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.851334 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.851351 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.954746 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.954789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.954801 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.954818 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:06 crc kubenswrapper[4703]: I1209 12:06:06.954832 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:06Z","lastTransitionTime":"2025-12-09T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.057020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.057068 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.057080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.057096 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.057107 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.069396 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.069422 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.069396 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.069470 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:07 crc kubenswrapper[4703]: E1209 12:06:07.069523 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:07 crc kubenswrapper[4703]: E1209 12:06:07.069602 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:07 crc kubenswrapper[4703]: E1209 12:06:07.069715 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:07 crc kubenswrapper[4703]: E1209 12:06:07.069774 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.159362 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.159403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.159413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.159428 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.159437 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.261287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.261323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.261332 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.261363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.261373 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.363460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.363548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.363567 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.363618 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.363630 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.419206 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/0.log" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.419272 4703 generic.go:334] "Generic (PLEG): container finished" podID="b57e1095-b0e1-4b30-a491-00852a5219e7" containerID="c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70" exitCode=1 Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.419306 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerDied","Data":"c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.419677 4703 scope.go:117] "RemoveContainer" containerID="c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.433176 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.443299 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.461591 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.465396 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.465416 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.465423 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.465437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.465445 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.470601 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.487820 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.498257 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.509149 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.521417 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.536685 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.547019 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.557376 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.567584 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.567620 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.567630 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.567646 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.567657 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.570810 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d
0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.583461 4703 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.596689 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.608543 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.619301 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.630586 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.670481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.670513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.670522 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.670535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.670543 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.774140 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.774176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.774213 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.774226 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.774234 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.876268 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.876295 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.876303 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.876317 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.876326 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.978003 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.978028 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.978036 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.978048 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:07 crc kubenswrapper[4703]: I1209 12:06:07.978056 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:07Z","lastTransitionTime":"2025-12-09T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.079936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.079969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.079977 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.079991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.080003 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.182399 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.182445 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.182463 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.182481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.182493 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.284486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.284513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.284521 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.284534 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.284542 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.386403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.386432 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.386443 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.386460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.386469 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.424209 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/0.log" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.424266 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerStarted","Data":"71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.439025 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.450690 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.463160 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.472998 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.482951 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.488339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.488390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.488400 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.488414 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.488425 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.494792 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.505934 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.514880 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 
12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.526470 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.536736 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.553164 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.562148 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.580332 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.589557 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.591182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.591244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.591256 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.591271 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.591280 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.600837 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.612166 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.622750 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.693544 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.693585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.693596 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.693611 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.693622 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.795748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.795785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.795796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.795810 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.795820 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.897794 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.897837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.897848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.897863 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:08 crc kubenswrapper[4703]: I1209 12:06:08.897874 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:08Z","lastTransitionTime":"2025-12-09T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.000064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.000104 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.000114 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.000127 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.000136 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.068786 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.068851 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:09 crc kubenswrapper[4703]: E1209 12:06:09.068910 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.069068 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:09 crc kubenswrapper[4703]: E1209 12:06:09.069122 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:09 crc kubenswrapper[4703]: E1209 12:06:09.069055 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.069144 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:09 crc kubenswrapper[4703]: E1209 12:06:09.069290 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.102292 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.102328 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.102337 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.102351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.102361 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.203842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.203872 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.203880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.203892 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.203902 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.306472 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.306508 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.306545 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.306563 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.306572 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.408609 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.408651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.408661 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.408675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.408685 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.511051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.511101 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.511117 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.511136 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.511149 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.613009 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.613057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.613070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.613086 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.613096 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.714942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.714981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.714992 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.715008 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.715021 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.817362 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.817408 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.817419 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.817438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.817449 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.919305 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.919341 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.919352 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.919367 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:09 crc kubenswrapper[4703]: I1209 12:06:09.919379 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:09Z","lastTransitionTime":"2025-12-09T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.022167 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.022224 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.022236 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.022254 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.022267 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.124244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.124289 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.124298 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.124315 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.124326 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.226837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.226888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.226902 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.226919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.226930 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.329870 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.329908 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.329920 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.329938 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.329950 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.431259 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.431288 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.431297 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.431309 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.431319 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.535221 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.535257 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.535268 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.535282 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.535294 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.637566 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.637615 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.637623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.637638 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.637647 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.739854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.739889 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.739898 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.739911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.739921 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.842176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.842250 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.842262 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.842300 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.842314 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.944285 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.944322 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.944333 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.944347 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:10 crc kubenswrapper[4703]: I1209 12:06:10.944357 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:10Z","lastTransitionTime":"2025-12-09T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.046682 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.046722 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.046730 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.046745 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.046754 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.069156 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.069219 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.069210 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.069157 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:11 crc kubenswrapper[4703]: E1209 12:06:11.069343 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:11 crc kubenswrapper[4703]: E1209 12:06:11.069446 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:11 crc kubenswrapper[4703]: E1209 12:06:11.069537 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:11 crc kubenswrapper[4703]: E1209 12:06:11.069623 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.081820 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.094773 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.106613 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.116199 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.132027 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.145339 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.148953 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.148984 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc 
kubenswrapper[4703]: I1209 12:06:11.148993 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.149027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.149039 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.157443 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.171336 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.184828 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.195613 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.205311 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.216205 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.227154 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.239614 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.249263 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.251283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.251329 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.251339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.251355 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.251364 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.268740 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a
45f3acedc0b276a83e885db1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
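
The ovnkube-controller container in this status is in CrashLoopBackOff with a 20s delay at restartCount 2. Assuming kubelet's usual restart schedule (a 10s base delay that doubles per restart and is capped at 5m; these constants are recalled defaults, not read from this log), the delays line up with the message as in this sketch:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial delay, doubling, capped at 5m.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restartCount %d -> back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumptions restartCount 2 yields exactly the "back-off 20s" quoted above. The log resumes below.
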
pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.278489 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.352880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.352913 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.352928 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.352942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.352951 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
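
The err strings throughout this section are doubly escaped strategic-merge patches. Unescaped, each carries a "$setElementOrder/conditions" directive that pins the ordering of the conditions list while only the changed elements are sent. A minimal Go reconstruction of that shape, with field values abridged from the patches in this log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Skeleton of the status patch kubelet is trying to send.
        patch := map[string]any{
            "status": map[string]any{
                // Directive: preserve this ordering of the conditions array.
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "MemoryPressure"}, {"type": "DiskPressure"},
                    {"type": "PIDPressure"}, {"type": "Ready"},
                },
                // Only the elements that actually changed are included.
                "conditions": []map[string]string{{
                    "type":   "Ready",
                    "status": "False",
                    "reason": "KubeletNotReady",
                }},
            },
        }
        out, err := json.MarshalIndent(patch, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }

The log resumes below.
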
Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.454945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.455302 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.455311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.455326 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.455337 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.558675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.558716 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.558728 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.558752 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.558765 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.661482 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.661525 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.661534 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.661550 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.661559 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.763947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.763990 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.764001 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.764017 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.764028 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.866378 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.866411 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.866419 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.866432 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.866440 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.968158 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.968215 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.968227 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.968244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:11 crc kubenswrapper[4703]: I1209 12:06:11.968257 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:11Z","lastTransitionTime":"2025-12-09T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.070865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.070909 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.070921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.070936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.070947 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.172816 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.172864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.172876 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.172891 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.172903 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.275275 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.275313 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.275321 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.275336 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.275344 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
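
The recurring NodeNotReady condition in these entries comes from the runtime network check: kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, which stays empty until ovnkube-controller (crash-looping above) writes one. A rough stand-in for that check, scanning the directory named in the message (the accepted extensions .conf/.conflist/.json mirror the CNI config loader as far as I know):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, filepath.Join(dir, e.Name()))
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", dir)
        } else {
            fmt.Println("CNI configs:", found)
        }
    }

The log resumes below.
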
Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.377294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.377339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.377349 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.377363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.377372 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.479799 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.479843 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.479853 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.479867 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.479877 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.582273 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.582323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.582334 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.582352 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.582364 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.666450 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.666497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.666508 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.666523 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.666532 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.679614 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:12Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.683924 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.684003 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
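
The node-status patch just above fails against the /node webhook path with the same expired certificate as the /pod path. Taking the two timestamps straight from the error text, the certificate has been expired for roughly 107 days:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both literals are copied verbatim from the error text, so the
        // parse errors are ignored here.
        now, _ := time.Parse(time.RFC3339, "2025-12-09T12:06:12Z")
        notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
        fmt.Printf("expired %.1f days ago\n", now.Sub(notAfter).Hours()/24)
    }

This prints "expired 106.8 days ago", consistent with a cluster image whose rotation window lapsed while it was powered off. The log resumes below.
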
event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.684021 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.684038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.684052 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.696387 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:12Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.700887 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.700930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.700941 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.700955 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.700966 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.713608 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:12Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.716972 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.717014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
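The root cause sits at the tail of every rejected patch: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, so each node-status PATCH dies at the TLS handshake. A minimal sketch for confirming the validity window from the node, assuming Python 3 with the third-party cryptography package installed (host and port are taken from the log line above):

    import ssl
    from cryptography import x509

    # Fetch the webhook's serving certificate WITHOUT verifying it
    # (verification is exactly what is failing) and print its window.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

On OpenShift Local/CRC this pattern usually means the VM was booted long after its bundled certificates were minted; nothing below recovers until that certificate is rotated.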
event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.717026 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.717043 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.717057 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.728422 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:12Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.731824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.731863 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
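The same patch has now failed twice more within ~15 ms (12:06:12.713608 and .728422); one more attempt just below exhausts the kubelet's budget and it logs "update node status exceeds retry count". That budget appears to be kubelet's fixed nodeStatusUpdateRetry constant (5 attempts per sync period in the upstream source). A sketch of the pattern producing these records, where patch_node_status is a hypothetical stand-in for the real PATCH call the expired webhook certificate keeps rejecting:

    # Retry loop sketch, assuming a fixed per-sync budget like kubelet's
    # nodeStatusUpdateRetry. Not kubelet code; illustrates the log shape only.
    NODE_STATUS_UPDATE_RETRY = 5

    def try_update_node_status(patch_node_status) -> bool:
        for _ in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_node_status()  # hypothetical PATCH against the API server
                return True
            except Exception as err:  # here: the x509 "certificate has expired" error
                print(f"Error updating node status, will retry: {err}")
        print("Unable to update node status: update node status exceeds retry count")
        return False

Note that exhausting the budget only abandons this sync period; the next period starts a fresh set of attempts, which is why the cycle repeats below.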
event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.731874 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.731890 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.731899 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.748694 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:12Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:12 crc kubenswrapper[4703]: E1209 12:06:12.748890 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.750750 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.750786 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.750796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.750812 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.750826 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.853432 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.853479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.853489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.853504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.853515 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.955173 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.955231 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.955242 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.955258 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:12 crc kubenswrapper[4703]: I1209 12:06:12.955270 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:12Z","lastTransitionTime":"2025-12-09T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.057382 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.057433 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.057441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.057458 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.057493 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.068549 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.068610 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.068568 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.068568 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:13 crc kubenswrapper[4703]: E1209 12:06:13.068666 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:13 crc kubenswrapper[4703]: E1209 12:06:13.068716 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:13 crc kubenswrapper[4703]: E1209 12:06:13.069011 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:13 crc kubenswrapper[4703]: E1209 12:06:13.069068 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.159124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.159157 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.159167 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.159182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.159209 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.261278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.261323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.261333 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.261349 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.261361 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.363731 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.363781 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.363789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.363805 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.363814 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.466148 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.466206 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.466222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.466239 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.466251 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.568236 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.568274 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.568281 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.568295 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.568304 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.670490 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.670531 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.670540 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.670555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.670564 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.773151 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.773220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.773238 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.773291 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.773303 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.875180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.875248 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.875258 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.875272 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.875282 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.977763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.977811 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.977820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.977834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:13 crc kubenswrapper[4703]: I1209 12:06:13.977842 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:13Z","lastTransitionTime":"2025-12-09T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.080852 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.081155 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.081270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.081365 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.081431 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.183796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.184000 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.184098 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.184171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.184277 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.286102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.286144 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.286155 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.286171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.286201 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.387766 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.387802 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.387812 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.387828 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.387837 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.489988 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.490020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.490028 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.490041 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.490049 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.592573 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.592618 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.592629 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.592645 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.592655 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.695824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.695866 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.695879 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.695895 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.695906 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.797517 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.797555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.797566 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.797580 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.797590 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.900670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.900731 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.900745 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.900766 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:14 crc kubenswrapper[4703]: I1209 12:06:14.900781 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:14Z","lastTransitionTime":"2025-12-09T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.003113 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.003456 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.003542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.003619 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.003689 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.069560 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.069701 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.069772 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.069829 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:15 crc kubenswrapper[4703]: E1209 12:06:15.069954 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:15 crc kubenswrapper[4703]: E1209 12:06:15.070282 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:15 crc kubenswrapper[4703]: E1209 12:06:15.070180 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:15 crc kubenswrapper[4703]: E1209 12:06:15.070376 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.105567 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.105615 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.105624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.105640 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.105650 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.207932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.208216 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.208339 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.208428 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.208514 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.310502 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.310822 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.310890 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.310959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.311037 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.413759 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.413790 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.413798 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.413812 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.413821 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.516484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.516523 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.516531 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.516546 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.516555 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.618953 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.619052 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.619064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.619082 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.619093 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.721320 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.721594 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.721659 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.721726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.721793 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.823579 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.823615 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.823623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.823636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.823646 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.926663 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.926899 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.926913 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.926930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:15 crc kubenswrapper[4703]: I1209 12:06:15.926943 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:15Z","lastTransitionTime":"2025-12-09T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.029589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.029630 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.029639 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.029655 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.029665 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.132378 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.132416 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.132426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.132440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.132449 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.234991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.235064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.235087 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.235119 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.235141 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.337150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.337223 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.337232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.337248 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.337257 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.439976 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.440047 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.440065 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.440089 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.440107 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.542494 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.542552 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.542570 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.542593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.542609 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.645244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.645301 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.645323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.645351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.645375 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.747407 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.747437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.747445 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.747457 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.747468 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.849420 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.849465 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.849473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.849486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.849494 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.951345 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.951377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.951394 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.951413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:16 crc kubenswrapper[4703]: I1209 12:06:16.951423 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:16Z","lastTransitionTime":"2025-12-09T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.053861 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.053907 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.053921 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.053937 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.053948 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.068824 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.068874 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.068910 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:17 crc kubenswrapper[4703]: E1209 12:06:17.068983 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.069011 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:17 crc kubenswrapper[4703]: E1209 12:06:17.069173 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:17 crc kubenswrapper[4703]: E1209 12:06:17.069322 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:17 crc kubenswrapper[4703]: E1209 12:06:17.069425 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.155705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.155755 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.155762 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.155776 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.155790 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.259087 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.259132 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.259142 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.259160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.259170 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.362020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.362060 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.362069 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.362085 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.362094 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.464375 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.464418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.464429 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.464449 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.464462 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.567869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.567941 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.567966 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.567995 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.568018 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.670482 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.670535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.670557 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.670577 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.670592 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.772713 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.772764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.772776 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.772791 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.772802 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.875357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.875399 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.875410 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.875426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.875437 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.977273 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.977323 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.977335 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.977351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:17 crc kubenswrapper[4703]: I1209 12:06:17.977363 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:17Z","lastTransitionTime":"2025-12-09T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.069241 4703 scope.go:117] "RemoveContainer" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.079027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.079051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.079062 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.079074 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.079082 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.181513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.181545 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.181555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.181570 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.181581 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.284067 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.284107 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.284118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.284133 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.284144 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.387071 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.387112 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.387125 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.387141 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.387152 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.458822 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/2.log" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.460876 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.461820 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.472770 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.483711 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.488943 4703 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.488982 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.489207 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.489245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.489256 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.494851 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.506661 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.517570 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.529788 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.540968 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.560676 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.574893 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.590237 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.591563 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.591593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.591600 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.591613 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.591623 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.606212 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.618159 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.630156 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.641171 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.653848 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.665908 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.679218 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.693599 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.693640 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc 
kubenswrapper[4703]: I1209 12:06:18.693651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.693666 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.693676 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.795644 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.795682 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.795690 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.795705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.795715 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.897470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.897506 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.897517 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.897530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.897539 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.999702 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.999747 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.999761 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.999777 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:18 crc kubenswrapper[4703]: I1209 12:06:18.999789 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:18Z","lastTransitionTime":"2025-12-09T12:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.069640 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.069686 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.069784 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.069950 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:19 crc kubenswrapper[4703]: E1209 12:06:19.069939 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:19 crc kubenswrapper[4703]: E1209 12:06:19.070121 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:19 crc kubenswrapper[4703]: E1209 12:06:19.070172 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:19 crc kubenswrapper[4703]: E1209 12:06:19.070251 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.102013 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.102098 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.102108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.102124 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.102136 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.204133 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.204178 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.204205 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.204223 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.204233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.306530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.306570 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.306578 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.306621 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.306631 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.409659 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.409702 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.409711 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.409725 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.409738 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.511869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.511911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.511919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.511935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.511947 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.614379 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.614426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.614467 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.614483 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.614493 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.716886 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.716917 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.716927 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.716948 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.716961 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.819539 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.819583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.819594 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.819608 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.819620 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.922419 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.922480 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.922492 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.922508 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:19 crc kubenswrapper[4703]: I1209 12:06:19.922519 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:19Z","lastTransitionTime":"2025-12-09T12:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.025222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.025252 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.025260 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.025272 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.025282 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.082126 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.128011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.128051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.128063 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.128081 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.128092 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.230551 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.230585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.230593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.230609 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.230619 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.332999 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.333042 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.333053 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.333070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.333080 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.435879 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.435946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.435959 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.435975 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.435988 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.466884 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/3.log" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.467452 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/2.log" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.469824 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" exitCode=1 Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.469924 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.469991 4703 scope.go:117] "RemoveContainer" containerID="81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.471256 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:06:20 crc kubenswrapper[4703]: E1209 12:06:20.471461 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.486598 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.498228 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.510577 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.521549 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.532414 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.539090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.539119 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.539130 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.539146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.539158 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.545845 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d
0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.556700 4703 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.566244 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 
12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.579711 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.591728 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.605039 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.615345 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.635901 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:19Z\\\",\\\"message\\\":\\\"nformer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:19Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:19.248568 6775 services_controller.go:434] Service openshift-dns-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-dns-operator 4bf7a6e2-037e-4e09-ad6b-2e7f1059a532 4106 0 2025-02-23 05:12:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000633e87 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Serv\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.641519 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.641568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.641580 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.641598 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.641610 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.645812 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.664389 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ae32bd0-593a-4f16-9fe3-7d89da2ad915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113937f0cd549b93c1c4e084ab24af7044c8a3a
887db183bcebed112ee2e091e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.678239 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.689956 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.702978 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:20Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.743624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.743665 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.743673 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.743688 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.743699 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.845704 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.845756 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.845772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.845794 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.845814 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.947904 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.947934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.947942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.947954 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:20 crc kubenswrapper[4703]: I1209 12:06:20.947962 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:20Z","lastTransitionTime":"2025-12-09T12:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.050689 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.050738 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.050753 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.050770 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.050783 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.068765 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:21 crc kubenswrapper[4703]: E1209 12:06:21.068889 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.068927 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.068988 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:21 crc kubenswrapper[4703]: E1209 12:06:21.069058 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.069064 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:21 crc kubenswrapper[4703]: E1209 12:06:21.069119 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:21 crc kubenswrapper[4703]: E1209 12:06:21.069174 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.081912 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.095353 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.108266 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.122158 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.134516 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.146364 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.157359 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.160232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.160304 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.160523 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.160804 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.160845 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.174876 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782d
fa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:19Z\\\",\\\"message\\\":\\\"nformer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:19Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:19.248568 6775 services_controller.go:434] Service openshift-dns-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-dns-operator 4bf7a6e2-037e-4e09-ad6b-2e7f1059a532 4106 0 2025-02-23 05:12:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true 
include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000633e87 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Serv\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.185215 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.204374 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ae32bd0-593a-4f16-9fe3-7d89da2ad915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113937f0cd549b93c1c4e084ab24af7044c8a3a
887db183bcebed112ee2e091e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.216585 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.228441 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.241076 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.249711 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.262906 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.264096 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.264222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.264297 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.264364 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.264422 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.276881 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.289829 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.301994 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.367041 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.367088 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.367097 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.367114 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.367123 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.468791 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.468826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.468838 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.468854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.468865 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.474179 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/3.log" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.571114 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.571159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.571168 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.571182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.571206 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.673732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.674069 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.674079 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.674093 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.674105 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.775812 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.775853 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.775864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.775888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.775900 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.877929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.877970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.877981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.877998 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.878009 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.979807 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.979846 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.979856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.979868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:21 crc kubenswrapper[4703]: I1209 12:06:21.979876 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:21Z","lastTransitionTime":"2025-12-09T12:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.082905 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.082982 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.082995 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.083015 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.083027 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.185516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.185551 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.185560 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.185575 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.185585 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.287751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.287828 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.287837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.287850 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.287861 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.389926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.389971 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.389981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.389996 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.390009 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.491827 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.491864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.491875 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.491889 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.491901 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.594305 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.594343 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.594351 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.594365 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.594376 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.697657 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.697697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.697707 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.697722 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.697735 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.801151 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.801296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.801322 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.801359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.801390 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.904589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.904647 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.904656 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.904676 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.904877 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:22Z","lastTransitionTime":"2025-12-09T12:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:22 crc kubenswrapper[4703]: I1209 12:06:22.966803 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:22 crc kubenswrapper[4703]: E1209 12:06:22.966953 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:26.96691296 +0000 UTC m=+146.215676479 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.007683 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.007760 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.007771 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.007790 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.007803 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.040018 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.040055 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.040064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.040079 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.040090 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.051890 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.055051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.055102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.055112 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.055128 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.055138 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.066609 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.067806 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.067848 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.067882 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.067902 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.067952 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.067981 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.067996 4703 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068001 4703 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068011 4703 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068044 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.068028264 +0000 UTC m=+146.316791783 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068062 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.068053665 +0000 UTC m=+146.316817184 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068075 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.068067635 +0000 UTC m=+146.316831264 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068093 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068126 4703 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068139 4703 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068211 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.068175119 +0000 UTC m=+146.316938708 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.068639 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.068670 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.068732 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.068890 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.068947 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.069016 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.069060 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.069116 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.069952 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.069993 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.070003 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.070016 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.070024 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.081983 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.085956 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.085994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.086006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.086024 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.086037 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.096748 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.100349 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.100393 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.100410 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.100425 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.100434 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.112640 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:23Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:23 crc kubenswrapper[4703]: E1209 12:06:23.112755 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.114287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.114317 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.114327 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.114344 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.114356 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.216705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.216743 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.216756 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.216771 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.216782 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.319006 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.319311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.319391 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.319611 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.319699 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.422374 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.422661 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.422804 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.422885 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.422971 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.526321 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.526357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.526366 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.526380 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.526390 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.628733 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.628778 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.628787 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.628802 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.628811 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.731397 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.731440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.731453 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.731469 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.731480 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.834080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.834415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.834546 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.834638 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.834760 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.937512 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.937797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.937865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.937939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:23 crc kubenswrapper[4703]: I1209 12:06:23.937997 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:23Z","lastTransitionTime":"2025-12-09T12:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.040439 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.040474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.040483 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.040497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.040505 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.143448 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.143502 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.143515 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.143538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.143557 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.250992 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.251054 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.251077 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.251106 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.251127 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.353576 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.353647 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.353668 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.353699 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.353721 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.456493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.456549 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.456566 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.456586 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.456600 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.558890 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.558922 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.558932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.558945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.558953 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.660442 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.660487 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.660501 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.660517 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.660528 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.762785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.762825 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.762836 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.762851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.762864 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.864938 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.864983 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.864992 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.865010 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.865021 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.967251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.967287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.967296 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.967309 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:24 crc kubenswrapper[4703]: I1209 12:06:24.967318 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:24Z","lastTransitionTime":"2025-12-09T12:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.068630 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.068675 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.068717 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.068756 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:25 crc kubenswrapper[4703]: E1209 12:06:25.068749 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:25 crc kubenswrapper[4703]: E1209 12:06:25.068810 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:25 crc kubenswrapper[4703]: E1209 12:06:25.068858 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:25 crc kubenswrapper[4703]: E1209 12:06:25.068902 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.069335 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.069356 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.069363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.069373 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.069382 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.171685 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.171732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.171748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.171764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.171774 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.274174 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.274232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.274241 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.274255 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.274264 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.376406 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.376470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.376479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.376493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.376502 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.478345 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.478390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.478402 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.478425 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.478438 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.580636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.580669 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.580678 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.580691 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.580700 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.684328 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.684377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.684400 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.684422 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.684438 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.786278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.786328 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.786341 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.786357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.786368 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.888597 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.888634 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.888643 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.888658 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.888667 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.991447 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.991497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.991513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.991530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:25 crc kubenswrapper[4703]: I1209 12:06:25.991593 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:25Z","lastTransitionTime":"2025-12-09T12:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.093877 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.093916 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.093969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.093987 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.093998 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.196289 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.196332 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.196343 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.196360 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.196371 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.299140 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.299447 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.299559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.299659 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.299755 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.402147 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.402460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.402524 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.402597 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.402671 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.504734 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.504774 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.504783 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.504799 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.504810 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.607104 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.607366 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.607427 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.607490 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.607547 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.710720 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.711002 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.711096 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.711199 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.711280 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.814222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.814261 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.814271 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.814288 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.814299 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.916559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.916742 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.916820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.916892 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:26 crc kubenswrapper[4703]: I1209 12:06:26.916948 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:26Z","lastTransitionTime":"2025-12-09T12:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.018724 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.018762 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.018771 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.018786 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.018795 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.069214 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.069260 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:27 crc kubenswrapper[4703]: E1209 12:06:27.069339 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.069237 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.069362 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:27 crc kubenswrapper[4703]: E1209 12:06:27.069553 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:27 crc kubenswrapper[4703]: E1209 12:06:27.069612 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:27 crc kubenswrapper[4703]: E1209 12:06:27.069697 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.120349 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.120379 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.120388 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.120401 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.120408 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.222338 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.222364 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.222372 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.222385 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.222393 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.324538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.324574 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.324583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.324597 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.324607 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.426939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.426992 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.427000 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.427014 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.427024 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.529276 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.529314 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.529324 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.529337 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.529347 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.631440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.631492 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.631503 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.631516 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.631525 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.733684 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.733724 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.733735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.733750 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.733760 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.835929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.835961 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.835969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.835981 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.835989 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.938478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.938578 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.938598 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.938620 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:27 crc kubenswrapper[4703]: I1209 12:06:27.938642 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:27Z","lastTransitionTime":"2025-12-09T12:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.041557 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.041840 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.041905 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.041980 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.042051 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.144710 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.145033 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.145134 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.145226 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.145297 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.247633 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.247696 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.247705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.247719 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.247728 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.350334 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.350374 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.350383 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.350398 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.350407 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.452824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.453054 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.453165 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.453330 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.453450 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.555634 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.555677 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.555686 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.555702 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.555713 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.658031 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.658084 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.658095 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.658110 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.658122 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.760464 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.760798 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.760865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.760935 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.760997 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.862709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.863225 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.863299 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.863359 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.863418 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.965287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.965322 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.965332 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.965347 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:28 crc kubenswrapper[4703]: I1209 12:06:28.965358 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:28Z","lastTransitionTime":"2025-12-09T12:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.067427 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.067462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.067472 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.067486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.067496 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.068775 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.068862 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:29 crc kubenswrapper[4703]: E1209 12:06:29.068991 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:29 crc kubenswrapper[4703]: E1209 12:06:29.068863 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.068913 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.068778 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:29 crc kubenswrapper[4703]: E1209 12:06:29.069419 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:29 crc kubenswrapper[4703]: E1209 12:06:29.069343 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.169656 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.169931 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.169999 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.170237 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.170330 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.272410 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.272441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.272449 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.272461 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.272470 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.374915 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.374958 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.374969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.374983 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.374993 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.477792 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.477828 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.477837 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.477851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.477860 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.580489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.580556 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.580566 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.580583 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.580592 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.683245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.683282 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.683293 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.683310 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.683321 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.785478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.785519 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.785530 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.785545 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.785554 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.887930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.887963 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.887972 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.887985 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.887994 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.990049 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.990393 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.990478 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.990648 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:29 crc kubenswrapper[4703]: I1209 12:06:29.990744 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:29Z","lastTransitionTime":"2025-12-09T12:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.093059 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.093111 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.093122 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.093140 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.093152 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.195769 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.195799 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.195808 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.195821 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.195829 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.298038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.298080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.298093 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.298108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.298118 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.399975 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.400003 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.400011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.400023 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.400032 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.502722 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.502765 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.502779 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.502797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.502807 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.604523 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.604577 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.604592 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.604616 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.604628 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.706737 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.706789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.706800 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.706817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.706828 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.809160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.809223 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.809235 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.809252 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.809261 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.911444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.911479 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.911488 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.911504 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:30 crc kubenswrapper[4703]: I1209 12:06:30.911514 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:30Z","lastTransitionTime":"2025-12-09T12:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.013896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.013931 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.013941 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.013955 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.013965 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.069562 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.069634 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:31 crc kubenswrapper[4703]: E1209 12:06:31.069701 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:31 crc kubenswrapper[4703]: E1209 12:06:31.069771 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.069877 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.069978 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:31 crc kubenswrapper[4703]: E1209 12:06:31.070145 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:31 crc kubenswrapper[4703]: E1209 12:06:31.070267 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.081017 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.085371 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.102999 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-api
server-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.117012 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.117070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.117081 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.117108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.117123 4703 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.120117 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.134357 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.148459 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.171703 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81d94ce172f3ac2803f304ca33a0549f1e75e67a45f3acedc0b276a83e885db1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:05:47Z\\\",\\\"message\\\":\\\"cluster\\\\\\\", UUID:\\\\\\\"d4efc4a8-c514-4a6b-901c-2953978b50d3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-multus/multus-admission-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-multus/multus-admission-controller\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.119\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1209 12:05:46.875029 6369 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer 
Row:map[extern\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:19Z\\\",\\\"message\\\":\\\"nformer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:19Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:19.248568 6775 services_controller.go:434] Service openshift-dns-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-dns-operator 4bf7a6e2-037e-4e09-ad6b-2e7f1059a532 4106 0 2025-02-23 05:12:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000633e87 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Serv\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.183330 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.204738 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ae32bd0-593a-4f16-9fe3-7d89da2ad915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113937f0cd549b93c1c4e084ab24af7044c8a3a
887db183bcebed112ee2e091e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.220163 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.220234 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.220247 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.220265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.220277 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.224024 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.237967 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.249677 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.262540 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.273082 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.285974 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.298347 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.310863 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.322834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.322888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.322900 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.322918 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.322929 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.324237 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.337660 4703 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.425887 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.425922 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.425932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.425946 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.425957 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.527689 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.527735 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.527745 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.527759 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.527769 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.629885 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.629938 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.629947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.629961 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.629971 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.732703 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.732752 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.732763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.732780 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.732790 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.835097 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.835137 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.835146 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.835160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.835169 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.937433 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.937463 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.937471 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.937483 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:31 crc kubenswrapper[4703]: I1209 12:06:31.937491 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:31Z","lastTransitionTime":"2025-12-09T12:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.039049 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.039090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.039099 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.039113 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.039122 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.142090 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.142126 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.142137 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.142151 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.142161 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.244693 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.244736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.244748 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.244764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.244775 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.348166 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.348235 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.348245 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.348261 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.348270 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.450814 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.450848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.450856 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.450868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.450875 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.553042 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.553097 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.553113 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.553129 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.553140 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.655497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.655556 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.655567 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.655581 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.655591 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.757825 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.757868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.757880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.757897 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.757908 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.860455 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.860496 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.860510 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.860526 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.860538 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.963091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.963145 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.963158 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.963176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:32 crc kubenswrapper[4703]: I1209 12:06:32.963205 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:32Z","lastTransitionTime":"2025-12-09T12:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.065901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.065941 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.065951 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.065967 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.065978 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.069181 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.069299 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.069321 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.069349 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.069377 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.070044 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.070233 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.070422 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.070477 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.070571 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.082640 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.093274 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.106973 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.117911 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.120741 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.120772 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.120783 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.120799 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.120810 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.127700 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.132591 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.135709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.135873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.135996 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.136119 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.136233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.137979 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.147208 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa666b8-6b68-4065-98e4-46d13ac311d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe51d85ebdde170cd7531214b7f4c74c76e435371aa9bdec11400c38d6733d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.147481 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.150122 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.150240 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.150315 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.150378 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.150434 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.160103 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.161928 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.167904 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.167934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.167965 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.167979 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.167988 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.171504 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.178634 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.181651 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.182260 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.182295 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.182305 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.182318 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.182329 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.190032 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.191843 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: E1209 12:06:33.192094 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.193620 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.193751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.193906 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.194002 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.194062 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.205789 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:19Z\\\",\\\"message\\\":\\\"nformer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:19Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:19.248568 6775 services_controller.go:434] Service openshift-dns-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-dns-operator 4bf7a6e2-037e-4e09-ad6b-2e7f1059a532 4106 0 2025-02-23 05:12:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000633e87 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Serv\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.215058 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.231323 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ae32bd0-593a-4f16-9fe3-7d89da2ad915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113937f0cd549b93c1c4e084ab24af7044c8a3a
887db183bcebed112ee2e091e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.241294 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.251337 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.261132 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.273157 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.286985 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:33Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.296558 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.296720 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.296826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.296947 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.297084 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.399936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.400343 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.400355 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.400368 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.400377 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.502719 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.502757 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.502765 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.502779 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.502788 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.604474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.604513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.604522 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.604535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.604546 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.707623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.707671 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.707680 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.707693 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.707703 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.809681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.809715 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.809723 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.809736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.809745 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.912369 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.912435 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.912651 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.912677 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:33 crc kubenswrapper[4703]: I1209 12:06:33.912697 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:33Z","lastTransitionTime":"2025-12-09T12:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.014836 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.014878 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.014887 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.014901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.014910 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.117448 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.117483 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.117496 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.117511 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.117521 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.219835 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.219860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.219868 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.219880 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.219890 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.322706 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.322778 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.322794 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.322817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.322834 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.425005 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.425046 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.425056 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.425072 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.425084 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.526926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.526963 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.526976 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.526991 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.527001 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.629230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.629270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.629279 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.629293 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.629302 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.732294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.732330 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.732368 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.732382 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.732393 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.834872 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.834911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.834919 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.834932 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.834940 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.937371 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.937429 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.937441 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.937460 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:34 crc kubenswrapper[4703]: I1209 12:06:34.937472 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:34Z","lastTransitionTime":"2025-12-09T12:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.039709 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.039778 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.039789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.039807 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.039817 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.069297 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.069334 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.069333 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.069351 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:35 crc kubenswrapper[4703]: E1209 12:06:35.069423 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:35 crc kubenswrapper[4703]: E1209 12:06:35.069538 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:35 crc kubenswrapper[4703]: E1209 12:06:35.069607 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:35 crc kubenswrapper[4703]: E1209 12:06:35.069687 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.142813 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.142861 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.142871 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.142888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.142901 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.245777 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.247292 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.247320 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.247340 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.247351 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.349629 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.349661 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.349669 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.349681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.349689 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.453637 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.453697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.453710 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.453727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.453738 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.556788 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.556827 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.556835 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.556848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.556856 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.658654 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.658917 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.658995 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.659059 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.659133 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.762465 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.762770 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.762888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.763011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.763146 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.866157 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.866444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.866525 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.866625 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.866719 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.969568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.969596 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.969604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.969617 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:35 crc kubenswrapper[4703]: I1209 12:06:35.969625 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:35Z","lastTransitionTime":"2025-12-09T12:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.073068 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.073112 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.073123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.073143 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.073158 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.175023 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.175057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.175066 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.175078 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.175088 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.276990 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.277029 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.277037 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.277051 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.277060 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.379357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.379406 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.379415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.379427 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.379436 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.480994 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.481034 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.481041 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.481073 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.481082 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.583732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.583781 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.583790 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.583805 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.583814 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.686616 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.686650 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.686660 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.686676 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.686687 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.788414 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.788454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.788472 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.788489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.788499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.891612 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.891662 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.891672 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.891692 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.891704 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.993909 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.993956 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.993966 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.993985 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.993996 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:36Z","lastTransitionTime":"2025-12-09T12:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:36 crc kubenswrapper[4703]: I1209 12:06:36.997469 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:36 crc kubenswrapper[4703]: E1209 12:06:36.997599 4703 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:06:36 crc kubenswrapper[4703]: E1209 12:06:36.997699 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs podName:9f199898-7916-48b6-b5e6-c878bacae384 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:40.99768523 +0000 UTC m=+160.246448749 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs") pod "network-metrics-daemon-pf4r7" (UID: "9f199898-7916-48b6-b5e6-c878bacae384") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.069529 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.069654 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.069675 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:37 crc kubenswrapper[4703]: E1209 12:06:37.069776 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.069806 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:37 crc kubenswrapper[4703]: E1209 12:06:37.070021 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:37 crc kubenswrapper[4703]: E1209 12:06:37.070269 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:37 crc kubenswrapper[4703]: E1209 12:06:37.070395 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.096513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.096555 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.096568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.096585 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.096657 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.199582 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.199967 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.200066 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.200150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.200254 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.303100 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.303145 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.303160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.303182 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.303766 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.405612 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.405654 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.405671 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.405687 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.405697 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.507636 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.507929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.508055 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.508164 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.508295 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.610676 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.610712 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.610723 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.610737 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.610749 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.713281 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.713334 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.713346 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.713363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.713373 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.815743 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.815780 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.815789 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.815804 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.815813 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.918242 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.918270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.918279 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.918297 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:37 crc kubenswrapper[4703]: I1209 12:06:37.918308 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:37Z","lastTransitionTime":"2025-12-09T12:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.020380 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.020415 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.020424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.020437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.020463 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.123401 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.123440 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.123449 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.123464 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.123473 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.226038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.226079 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.226091 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.226108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.226120 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.329547 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.329582 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.329589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.329603 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.329614 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.431757 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.431791 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.431800 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.431813 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.431822 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.534118 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.534160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.534171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.534220 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.534235 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.635911 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.636008 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.636028 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.636044 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.636054 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.738294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.738337 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.738347 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.738361 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.738372 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.840818 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.840854 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.840864 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.840882 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.840892 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.943403 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.943437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.943450 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.943465 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:38 crc kubenswrapper[4703]: I1209 12:06:38.943474 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:38Z","lastTransitionTime":"2025-12-09T12:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.046163 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.046254 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.046278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.046294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.046304 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.069561 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:39 crc kubenswrapper[4703]: E1209 12:06:39.069655 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.069827 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.069837 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:39 crc kubenswrapper[4703]: E1209 12:06:39.069921 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.070004 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:39 crc kubenswrapper[4703]: E1209 12:06:39.070008 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:39 crc kubenswrapper[4703]: E1209 12:06:39.070154 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.148159 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.148654 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.148732 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.148812 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.148878 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.251639 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.251665 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.251674 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.251688 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.251705 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.353942 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.353982 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.353990 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.354007 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.354015 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.456204 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.456244 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.456256 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.456272 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.456282 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.558670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.558704 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.558716 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.558730 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.558740 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.661373 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.661413 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.661423 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.661438 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.661450 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.763675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.763705 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.763713 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.763726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.763735 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.865610 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.865670 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.865682 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.865697 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.865708 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.968179 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.968238 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.968248 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.968265 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:39 crc kubenswrapper[4703]: I1209 12:06:39.968276 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:39Z","lastTransitionTime":"2025-12-09T12:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.070763 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.070806 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.070818 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.070836 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.070849 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.173623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.173681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.173693 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.173717 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.173729 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.276760 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.276848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.276865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.276894 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.276912 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.380084 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.380141 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.380152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.380173 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.380203 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.483492 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.483529 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.483538 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.483560 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.483571 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.586594 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.586646 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.586657 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.586681 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.586698 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.689589 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.689690 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.689715 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.689751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.689794 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.793087 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.793139 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.793150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.793167 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.793177 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.895902 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.895944 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.895954 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.895968 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.895979 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.999390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.999450 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.999471 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.999495 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:40 crc kubenswrapper[4703]: I1209 12:06:40.999514 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:40Z","lastTransitionTime":"2025-12-09T12:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.069646 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.069724 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.069665 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.070006 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:41 crc kubenswrapper[4703]: E1209 12:06:41.070000 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:41 crc kubenswrapper[4703]: E1209 12:06:41.070224 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:41 crc kubenswrapper[4703]: E1209 12:06:41.070336 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:41 crc kubenswrapper[4703]: E1209 12:06:41.070407 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.087761 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efa666b8-6b68-4065-98e4-46d13ac311d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe51d85ebdde170cd7531214b7f4c74c76e435371aa9bdec11400c38d6733d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://892955233c39ca67e919ba0709618461def13836b90c5a61dd8eb629bea71647\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.101978 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.102025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.102036 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.102054 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.102067 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.109163 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"687e9c7e-e4b8-4bbf-871e-714260501e27\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"le observer\\\\nW1209 12:05:18.266857 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 12:05:18.266993 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:05:18.268588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1075686048/tls.crt::/tmp/serving-cert-1075686048/tls.key\\\\\\\"\\\\nI1209 12:05:18.602670 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:05:18.604681 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:05:18.604730 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:05:18.604783 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:05:18.604808 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:05:18.609177 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:05:18.609280 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:05:18.609328 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:05:18.609348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:05:18.609367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:05:18.609387 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:05:18.609237 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1209 12:05:18.612865 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.126798 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://998159f494b646ac88f1f1ffb09789bd146d7b3554b3a5106174d7c72f3ff583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.141693 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32956ceb-8540-406e-8693-e86efb46cd42\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1535901888e91bc0037177a3150033b17d6e2d143faa5a082dba0d096cad474\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tv4nx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-q8sfk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.156045 4703 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9dac2a02-8ee0-445c-bac8-4d448cda509d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c698b36fa974c95698ac8ac133b3caa9cbe8847626f5abf8088011c1a1a13032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://660ecb626e35b5e693783b416e62c773b996e7e39233160e422dc9357ca0fd2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q96fc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4sss9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.183467 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ae32bd0-593a-4f16-9fe3-7d89da2ad915\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ff53944c366e95d2f2186c0202fbd21e2465d6d6ca6f2874d56550dc5a5ff6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b368c179d04e90a5a01543b05fdd71a67c7877d7de57fbdbd416378cd586e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae6d08b388f19cf2d4587d19583d5c68d1ab921f809bc154917161a5e2cdc4fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://113937f0cd549b93c1c4e084ab24af7044c8a3a887db183bcebed112ee2e091e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f7f975f9fe265343d0c3c38c4a8310d8b9256bbfedb330fa42525f39c27cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76e6d78cb239cd0e6304e04f332e3d40053b95f7bb2c997b7ce43b6c3205547\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09657f3df2c538127b59a
65becbd64662b5ebba1021b2e258bab7ae7a1c160f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76130a5c4f0e966af1c3d2840c4a41b05b0b32b3cc7099a53e2bb39183311415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.197703 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e02414b1-6edb-41df-bef5-da5638c58c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf2e3462f3adecb5658327310dd9a543e1f7fe8b7d477f0f90c77f9a3c1724cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dd75d360dcc49fc8db00b8a075ce33da2bd845289f44a0148ab391635ab90d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://feccbfdb1fa7a9935e98927106faf0c924485823950276b473cdb8c06c4b07eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.204223 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.204254 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.204266 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.204280 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.204292 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.211470 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e04b82b0-1939-4ba9-8913-d41e77c0d60e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4efcec3e0a1bc588b61035ca4ff2f4932da9a6f39c37e48f2d81d3c7d6999822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a54cbf7d80c64289b2c6534ae640af5e06588f6d89e949a307ecd54f87eb429d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22136ab0fb8a3051c168e5b6e650e9b4da83daa189e67b570a453603bd5f4814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b2cdfd8cad0173fd775a465149996734b60c1c457ffc602c552103ecbb26f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.225116 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.234065 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ncbbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180fbd9-43db-436b-8166-3cbcb5a14da3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8f7249f78155fea92f6a72919fd391b841a3cdb25583c9b9bc1083c6e34f087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22tv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ncbbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.249947 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9173444-5181-4ee4-b651-11d92ccab0d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:19Z\\\",\\\"message\\\":\\\"nformer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:19Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:19.248568 6775 services_controller.go:434] Service openshift-dns-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-dns-operator 4bf7a6e2-037e-4e09-ad6b-2e7f1059a532 4106 0 2025-02-23 05:12:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[name:dns-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000633e87 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Serv\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2
fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xm2fh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7hrm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.264823 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f199898-7916-48b6-b5e6-c878bacae384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-856ln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:33Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pf4r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.280290 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f05c1cdf163073cebf69a5dcd00e285698d11f7c76995b202500503a9762d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.294982 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.306816 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.306873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.306885 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.306906 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.306920 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.310070 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15947b2dcc90c69809979bf2d4a222ee664245d897b1f9d9ca91d29551ad86ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://592c8564eeb877cf87544918cb97d0e32451a6afd1c201efafb4cfec9a1e5857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.322177 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rfmng" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a89eb00-454a-44b2-9b8e-6518b4a9d10c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6a0c3ac19e4c5217f17ea8cd7c742043ee7708b0f086c99341a4bbc062fa53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qf85c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:18Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rfmng\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.336068 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.352900 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65954588-9afb-47ff-8c0b-f83bf290da27\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://698ca5256efee75c73f1585ad1f7955676be6dac9cbd1c5aebdbf88848c35cee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aeaa589b823d27c4d29e522f951c5578acd67848c14381e3f61821458f74a966\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e7f1224d3313cfd9900663c77a41898ee26d0f91591e2f641a17a6a543c69b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f86761f861d07fe4e22b95651d0ed069799b95020e8e168527fd838ea6cc1e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c0a351532024238eb4c192066350cdd4f94945cfa549586ced1c91a2379490\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7baabf4646f56bb08044eb6cb29976274522e962906d573a279e967e64d191f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87f16f3d165437fc695faa410c055f3f3716fbb47e2dc1d41bba05315872e950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:05:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4r9tc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.368660 4703 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9zbgq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b57e1095-b0e1-4b30-a491-00852a5219e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:05:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:07Z\\\",\\\"message\\\":\\\"2025-12-09T12:05:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3\\\\n2025-12-09T12:05:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1457100e-c7f0-4b9d-98ec-8cebbfe75af3 to /host/opt/cni/bin/\\\\n2025-12-09T12:05:22Z [verbose] multus-daemon started\\\\n2025-12-09T12:05:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:06:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:05:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6sdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:05:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9zbgq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.409775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.409817 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.409827 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.409842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.409852 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.511640 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.511683 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.511695 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.511711 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.511722 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.613442 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.613664 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.613767 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.613873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.613979 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.715751 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.715788 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.715796 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.715809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.715820 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.818145 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.818180 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.818201 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.818219 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.818233 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.920497 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.920535 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.920552 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.920567 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:41 crc kubenswrapper[4703]: I1209 12:06:41.920578 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:41Z","lastTransitionTime":"2025-12-09T12:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.023354 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.023390 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.023408 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.023424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.023434 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.126115 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.126152 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.126160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.126176 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.126201 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.228548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.229150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.229240 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.229353 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.229459 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.331749 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.331795 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.331809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.331824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.331834 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.433823 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.433852 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.433860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.433872 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.433882 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.535820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.535855 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.535867 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.535881 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.535892 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.638211 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.638241 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.638251 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.638264 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.638272 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.740865 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.741133 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.741235 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.741308 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.741377 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.843405 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.843450 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.843461 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.843477 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.843487 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.945346 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.945607 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.945682 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.945758 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:42 crc kubenswrapper[4703]: I1209 12:06:42.945823 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:42Z","lastTransitionTime":"2025-12-09T12:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.048604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.048642 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.048684 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.048701 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.048713 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.069339 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.069479 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.069359 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.069534 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.069592 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.069345 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.069643 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.069685 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.150955 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.151009 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.151020 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.151038 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.151049 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.214665 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.214714 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.214726 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.214744 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.214758 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.227127 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:43Z is after 
2025-08-24T17:21:41Z" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.230926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.231036 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.231065 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.231102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.231126 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.245475 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:43Z is after 
2025-08-24T17:21:41Z" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.249114 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.249218 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.249233 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.249260 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.249275 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.263362 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:43Z is after 
2025-08-24T17:21:41Z" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.268793 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.269155 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.269322 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.269421 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.269501 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.282276 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:43Z is after 
2025-08-24T17:21:41Z" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.286057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.286099 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.286108 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.286123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.286132 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.296617 4703 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc650f57-0f1e-4118-b5e8-874027bb4fd3\\\",\\\"systemUUID\\\":\\\"538480e3-ee75-4c42-9816-5a001726e0b5\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:43Z is after 
2025-08-24T17:21:41Z" Dec 09 12:06:43 crc kubenswrapper[4703]: E1209 12:06:43.296748 4703 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.298070 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.298164 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.298263 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.298357 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.298441 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.400860 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.400918 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.400928 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.400941 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.400950 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.504666 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.504736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.504758 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.504787 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.504812 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.608750 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.608836 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.608859 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.608897 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.609579 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.712760 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.712820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.712835 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.712855 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.712867 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.816417 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.816475 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.816493 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.816513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.816530 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.920102 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.920631 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.920721 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.920835 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:43 crc kubenswrapper[4703]: I1209 12:06:43.920941 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:43Z","lastTransitionTime":"2025-12-09T12:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.023954 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.024004 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.024016 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.024035 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.024049 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.127418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.127498 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.127510 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.127529 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.127543 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.229593 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.229876 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.229978 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.230080 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.230161 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.333426 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.333462 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.333473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.333489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.333499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.436232 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.436542 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.436650 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.436785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.436866 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.540376 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.540424 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.540436 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.540451 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.540460 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.642171 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.642241 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.642252 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.642266 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.642276 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.744901 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.744937 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.744948 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.744962 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.744971 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.847720 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.847766 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.847777 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.847797 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.847811 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.950127 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.950160 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.950169 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.950183 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:44 crc kubenswrapper[4703]: I1209 12:06:44.950208 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:44Z","lastTransitionTime":"2025-12-09T12:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.052029 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.052115 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.052126 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.052142 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.052152 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.069396 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.069464 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.069519 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.069667 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:45 crc kubenswrapper[4703]: E1209 12:06:45.069657 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:45 crc kubenswrapper[4703]: E1209 12:06:45.069838 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:45 crc kubenswrapper[4703]: E1209 12:06:45.069878 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:45 crc kubenswrapper[4703]: E1209 12:06:45.069951 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.070645 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:06:45 crc kubenswrapper[4703]: E1209 12:06:45.070846 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.154311 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.154350 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.154363 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.154433 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.154448 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.256447 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.256484 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.256494 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.256509 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.256519 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.358848 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.358888 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.358897 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.358917 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.358929 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.461229 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.461270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.461278 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.461290 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.461299 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.563344 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.563624 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.563691 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.563769 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.563831 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.665604 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.665896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.665993 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.666068 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.666129 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.768064 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.768332 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.768398 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.768474 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.768541 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.870382 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.870452 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.870469 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.870489 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.870503 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.973150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.973242 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.973252 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.973270 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:45 crc kubenswrapper[4703]: I1209 12:06:45.973279 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:45Z","lastTransitionTime":"2025-12-09T12:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.075893 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.075937 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.075945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.075960 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.075970 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.181934 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.181970 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.181979 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.181992 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.182001 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.285643 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.285689 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.285701 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.285718 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.285730 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.387896 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.387930 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.387939 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.387953 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.387962 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.490486 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.490548 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.490559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.490576 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.490587 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.592156 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.592423 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.592520 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.592599 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.592678 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.695437 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.695473 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.695481 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.695494 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.695503 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.797822 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.797862 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.797873 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.797886 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.797896 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.900377 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.900628 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.900736 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.900826 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:46 crc kubenswrapper[4703]: I1209 12:06:46.900898 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:46Z","lastTransitionTime":"2025-12-09T12:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.003659 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.003701 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.003713 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.003727 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.003736 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.069532 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.069656 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.069700 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:47 crc kubenswrapper[4703]: E1209 12:06:47.069751 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:47 crc kubenswrapper[4703]: E1209 12:06:47.069817 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.069840 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:47 crc kubenswrapper[4703]: E1209 12:06:47.069881 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:47 crc kubenswrapper[4703]: E1209 12:06:47.070498 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.105945 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.106233 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.106364 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.106466 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.106547 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.209513 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.209564 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.209578 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.209596 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.209607 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.311764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.311809 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.311820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.311835 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.311846 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.414328 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.414383 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.414398 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.414418 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.414433 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.516498 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.516537 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.516546 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.516561 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.516572 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.618875 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.619222 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.619312 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.619407 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.619499 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.722447 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.722754 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.722833 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.722914 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.722994 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.825884 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.825926 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.825936 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.825951 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.825962 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.928785 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.928824 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.928834 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.928849 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:47 crc kubenswrapper[4703]: I1209 12:06:47.928860 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:47Z","lastTransitionTime":"2025-12-09T12:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.031405 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.031444 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.031454 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.031470 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.031481 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.134121 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.134264 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.134284 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.134313 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.134334 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.236978 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.237027 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.237039 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.237057 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.237069 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.339775 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.339820 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.339830 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.339851 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.339862 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.442138 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.442188 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.442216 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.442230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.442240 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.544325 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.544360 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.544371 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.544386 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.544397 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.647230 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.647283 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.647294 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.647310 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.647319 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.749842 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.749889 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.749903 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.749922 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.749936 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.852559 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.852623 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.852637 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.852659 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.852675 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.955086 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.955123 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.955135 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.955150 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:48 crc kubenswrapper[4703]: I1209 12:06:48.955160 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:48Z","lastTransitionTime":"2025-12-09T12:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.057675 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.057764 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.057781 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.057804 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.057816 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.069086 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.069116 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:49 crc kubenswrapper[4703]: E1209 12:06:49.069203 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.069233 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.069096 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:49 crc kubenswrapper[4703]: E1209 12:06:49.069280 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:49 crc kubenswrapper[4703]: E1209 12:06:49.069339 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:49 crc kubenswrapper[4703]: E1209 12:06:49.069490 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.160181 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.160263 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.160273 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.160287 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.160297 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.262869 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.262917 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.262929 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.262948 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.262961 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.366523 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.366568 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.366578 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.366594 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.366606 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.469956 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.470011 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.470023 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.470042 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.470053 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.573969 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.574015 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.574025 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.574040 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.574053 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.676621 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.676668 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.676679 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.676694 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:06:49 crc kubenswrapper[4703]: I1209 12:06:49.676704 4703 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:49Z","lastTransitionTime":"2025-12-09T12:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five node-status records repeat roughly every 100 ms from 12:06:49.779 through 12:06:50.807; only the timestamps change ...]
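Note: the repeating records above are the kubelet's node-status sync loop: each pass re-records the four Node* events and re-derives the node's Ready condition from the container runtime's NetworkReady status, and while no CNI configuration exists under /etc/kubernetes/cni/net.d/ the node stays NotReady with the same message. The Go sketch below shows the shape of that derivation; RuntimeCondition and nodeReady are illustrative stand-ins, not the kubelet's actual API.

package main

import "fmt"

// RuntimeCondition mirrors the shape of a CRI runtime status condition
// (illustrative stand-in; the real type lives in the CRI API).
type RuntimeCondition struct {
	Type    string // e.g. "RuntimeReady", "NetworkReady"
	Status  bool
	Reason  string
	Message string
}

// nodeReady folds the runtime's conditions into the node Ready
// condition: a false NetworkReady forces NotReady and surfaces the
// runtime's reason and message, which is the text echoed in the
// "Node became not ready" records above.
func nodeReady(conds []RuntimeCondition) (bool, string) {
	for _, c := range conds {
		if c.Type == "NetworkReady" && !c.Status {
			return false, fmt.Sprintf("container runtime network not ready: NetworkReady=false reason:%s message:%s", c.Reason, c.Message)
		}
	}
	return true, ""
}

func main() {
	ready, msg := nodeReady([]RuntimeCondition{{
		Type:    "NetworkReady",
		Status:  false,
		Reason:  "NetworkPluginNotReady",
		Message: "Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}})
	fmt.Println(ready, msg) // false, same message as the log records
}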
[... node-status records repeat at 12:06:50.910 and 12:06:51.013 ...]
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.068864 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.068883 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.068987 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:06:51 crc kubenswrapper[4703]: E1209 12:06:51.069092 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.069117 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:06:51 crc kubenswrapper[4703]: E1209 12:06:51.069511 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:06:51 crc kubenswrapper[4703]: E1209 12:06:51.069547 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:06:51 crc kubenswrapper[4703]: E1209 12:06:51.069621 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.102778 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ncbbx" podStartSLOduration=92.102761115 podStartE2EDuration="1m32.102761115s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.102683283 +0000 UTC m=+110.351446802" watchObservedRunningTime="2025-12-09 12:06:51.102761115 +0000 UTC m=+110.351524634"
[... node-status records repeat at 12:06:51.115 ...]
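Note: only pods that need the pod network hit the "Error syncing pod, skipping" path above; static host-network pods such as etcd-crc and kube-apiserver-crc keep running, which is why their startup durations are reported just below. A minimal sketch of that gate, assuming a pared-down Pod type (illustrative, not the kubelet's real sync logic):

package main

import "fmt"

// Pod keeps only the field relevant to the gating decision.
type Pod struct {
	Name        string
	HostNetwork bool
}

// canSync mirrors the gate visible in the records above: while the
// runtime network is not ready, pods on the pod network are requeued
// and only host-network pods may proceed.
func canSync(p Pod, networkReady bool) error {
	if !networkReady && !p.HostNetwork {
		return fmt.Errorf("network is not ready: container runtime network not ready")
	}
	return nil
}

func main() {
	for _, p := range []Pod{
		{Name: "openshift-network-diagnostics/network-check-target-xd92c", HostNetwork: false},
		{Name: "openshift-etcd/etcd-crc", HostNetwork: true},
	} {
		if err := canSync(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.Name)
		} else {
			fmt.Printf("syncing pod %q\n", p.Name)
		}
	}
}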
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.175321 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=31.175301016 podStartE2EDuration="31.175301016s" podCreationTimestamp="2025-12-09 12:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.158805229 +0000 UTC m=+110.407568768" watchObservedRunningTime="2025-12-09 12:06:51.175301016 +0000 UTC m=+110.424064535"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.189727 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.189708457 podStartE2EDuration="1m27.189708457s" podCreationTimestamp="2025-12-09 12:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.176031359 +0000 UTC m=+110.424794878" watchObservedRunningTime="2025-12-09 12:06:51.189708457 +0000 UTC m=+110.438471976"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.203368 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.203347473 podStartE2EDuration="59.203347473s" podCreationTimestamp="2025-12-09 12:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.190886463 +0000 UTC m=+110.439650002" watchObservedRunningTime="2025-12-09 12:06:51.203347473 +0000 UTC m=+110.452110992"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.203685 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rfmng" podStartSLOduration=93.203679895 podStartE2EDuration="1m33.203679895s" podCreationTimestamp="2025-12-09 12:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.202944281 +0000 UTC m=+110.451707800" watchObservedRunningTime="2025-12-09 12:06:51.203679895 +0000 UTC m=+110.452443414"
[... node-status records repeat at 12:06:51.218 ...]
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.281095 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4r9tc" podStartSLOduration=92.281076707 podStartE2EDuration="1m32.281076707s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.280471577 +0000 UTC m=+110.529235116" watchObservedRunningTime="2025-12-09 12:06:51.281076707 +0000 UTC m=+110.529840226"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.314547 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9zbgq" podStartSLOduration=92.314524593 podStartE2EDuration="1m32.314524593s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.297592174 +0000 UTC m=+110.546355693" watchObservedRunningTime="2025-12-09 12:06:51.314524593 +0000 UTC m=+110.563288112"
[... node-status records repeat at 12:06:51.320 ...]
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.330313 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podStartSLOduration=92.330291677 podStartE2EDuration="1m32.330291677s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.31601296 +0000 UTC m=+110.564776479" watchObservedRunningTime="2025-12-09 12:06:51.330291677 +0000 UTC m=+110.579055196"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.341011 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4sss9" podStartSLOduration=92.340995482 podStartE2EDuration="1m32.340995482s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.33133157 +0000 UTC m=+110.580095109" watchObservedRunningTime="2025-12-09 12:06:51.340995482 +0000 UTC m=+110.589759011"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.341721 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.341715714 podStartE2EDuration="20.341715714s" podCreationTimestamp="2025-12-09 12:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.341510788 +0000 UTC m=+110.590274307" watchObservedRunningTime="2025-12-09 12:06:51.341715714 +0000 UTC m=+110.590479223"
Dec 09 12:06:51 crc kubenswrapper[4703]: I1209 12:06:51.369118 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.369102442 podStartE2EDuration="1m32.369102442s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:51.368651947 +0000 UTC m=+110.617415466" watchObservedRunningTime="2025-12-09 12:06:51.369102442 +0000 UTC m=+110.617865961"
[... node-status records repeat at 12:06:51.423 ...]
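Note: in the "Observed pod startup duration" records above, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes time spent pulling images; the zero "0001-01-01 00:00:00" pull timestamps mean no pull was observed (images already present), so the two values coincide. A sketch of that arithmetic under those assumptions (function name illustrative):

package main

import (
	"fmt"
	"time"
)

// startupDurations reproduces the arithmetic behind the records above:
// end-to-end is running minus creation, and the SLO figure subtracts
// the image-pull window. Zero pull timestamps leave the two equal.
func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull)
	}
	return slo, e2e
}

func main() {
	// Values from the node-ca-ncbbx record: created 12:05:19, observed
	// running 12:06:51.102683283, no image pulls observed.
	created := time.Date(2025, 12, 9, 12, 5, 19, 0, time.UTC)
	running := time.Date(2025, 12, 9, 12, 6, 51, 102683283, time.UTC)
	slo, e2e := startupDurations(created, time.Time{}, time.Time{}, running)
	fmt.Println(slo, e2e) // both ~1m32.1s, matching the logged durations
}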
[... node-status records repeat roughly every 100 ms from 12:06:51.525 through 12:06:53.065; the node stays NotReady with the same no-CNI-configuration message ...]
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.068578 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.068626 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.068623 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:06:53 crc kubenswrapper[4703]: E1209 12:06:53.068704 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:06:53 crc kubenswrapper[4703]: E1209 12:06:53.068776 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.068782 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:06:53 crc kubenswrapper[4703]: E1209 12:06:53.068859 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:06:53 crc kubenswrapper[4703]: E1209 12:06:53.068904 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... node-status records repeat at 12:06:53.167, 12:06:53.270, 12:06:53.372 and 12:06:53.475 ...]
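Note: the same four pods fail sandbox creation at 12:06:51.068 and 12:06:53.068 above, and again at 12:06:55.068 further below, a roughly 2 s cadence: each failed sync is requeued and retried until CNI configuration appears. The loop below is an illustrative requeue sketch, not the kubelet's actual pod-worker machinery; the 2 s interval is inferred from these timestamps.

package main

import (
	"fmt"
	"time"
)

// syncPod stands in for the real pod sync; while CNI config is missing
// it always fails the way the records above do.
func syncPod(name string) error {
	return fmt.Errorf("network is not ready: container runtime network not ready")
}

func main() {
	const retryEvery = 2 * time.Second // inferred from the log cadence
	pod := "openshift-network-diagnostics/network-check-target-xd92c"
	for attempt := 0; attempt < 3; attempt++ {
		if err := syncPod(pod); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, pod)
			time.Sleep(retryEvery) // requeue and try again later
		}
	}
}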
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.568340 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/1.log"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.569564 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/0.log"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.569626 4703 generic.go:334] "Generic (PLEG): container finished" podID="b57e1095-b0e1-4b30-a491-00852a5219e7" containerID="71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda" exitCode=1
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.569662 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerDied","Data":"71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda"}
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.569696 4703 scope.go:117] "RemoveContainer" containerID="c902e8b479756afb7cf929d0963afc0f950288658427c1d49ff22ee4319cda70"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.570038 4703 scope.go:117] "RemoveContainer" containerID="71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda"
Dec 09 12:06:53 crc kubenswrapper[4703]: E1209 12:06:53.570214 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9zbgq_openshift-multus(b57e1095-b0e1-4b30-a491-00852a5219e7)\"" pod="openshift-multus/multus-9zbgq" podUID="b57e1095-b0e1-4b30-a491-00852a5219e7"
[... node-status records repeat at 12:06:53.580 ...]
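Note: "back-off 10s restarting failed container" above is the first step of the kubelet's crash-loop back-off: the restart delay starts at a base value and doubles per consecutive crash up to a cap. The 10 s base and 5 m cap used below match the commonly cited kubelet defaults, but treat them as assumptions of this sketch rather than values read from this log.

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay computes the restart delay after a given number of
// consecutive crashes: base, 2*base, 4*base, ... capped at max.
func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("crash %d -> back-off %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
	// crash 0 -> 10s, crash 1 -> 20s, crash 2 -> 40s, ... capped at 5m0s
}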
[... node-status records repeat at 12:06:53.610 ...]
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.649548 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"]
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.650138 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.652951 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.653046 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.653222 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.654024 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.763254 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.763395 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.763436 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85293ae-d250-4360-854c-ace7d0eefd58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.763492 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a85293ae-d250-4360-854c-ace7d0eefd58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.763528 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85293ae-d250-4360-854c-ace7d0eefd58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864650 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a85293ae-d250-4360-854c-ace7d0eefd58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864692 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85293ae-d250-4360-854c-ace7d0eefd58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864723 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864758 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864773 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85293ae-d250-4360-854c-ace7d0eefd58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864842 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.864862 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a85293ae-d250-4360-854c-ace7d0eefd58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.866127 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a85293ae-d250-4360-854c-ace7d0eefd58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.874126 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a85293ae-d250-4360-854c-ace7d0eefd58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.880812 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a85293ae-d250-4360-854c-ace7d0eefd58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2wrxp\" (UID: \"a85293ae-d250-4360-854c-ace7d0eefd58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Dec 09 12:06:53 crc kubenswrapper[4703]: I1209 12:06:53.968950 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp"
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp" Dec 09 12:06:54 crc kubenswrapper[4703]: I1209 12:06:54.574096 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp" event={"ID":"a85293ae-d250-4360-854c-ace7d0eefd58","Type":"ContainerStarted","Data":"2dc894cf4e66d0a5bcd12186f3ca1722c4395166e82591c4774192d4866f55e8"} Dec 09 12:06:54 crc kubenswrapper[4703]: I1209 12:06:54.574141 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp" event={"ID":"a85293ae-d250-4360-854c-ace7d0eefd58","Type":"ContainerStarted","Data":"a2c50069c37c8d6706fe496f3cadb3bf3c56448775c20f4452153a0acc045ee6"} Dec 09 12:06:54 crc kubenswrapper[4703]: I1209 12:06:54.575606 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/1.log" Dec 09 12:06:54 crc kubenswrapper[4703]: I1209 12:06:54.591305 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2wrxp" podStartSLOduration=95.591287976 podStartE2EDuration="1m35.591287976s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:06:54.590870963 +0000 UTC m=+113.839634482" watchObservedRunningTime="2025-12-09 12:06:54.591287976 +0000 UTC m=+113.840051495" Dec 09 12:06:55 crc kubenswrapper[4703]: I1209 12:06:55.068795 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:55 crc kubenswrapper[4703]: I1209 12:06:55.068899 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:55 crc kubenswrapper[4703]: I1209 12:06:55.068950 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:55 crc kubenswrapper[4703]: I1209 12:06:55.068796 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:55 crc kubenswrapper[4703]: E1209 12:06:55.068910 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:55 crc kubenswrapper[4703]: E1209 12:06:55.069059 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:55 crc kubenswrapper[4703]: E1209 12:06:55.069147 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:55 crc kubenswrapper[4703]: E1209 12:06:55.069275 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:57 crc kubenswrapper[4703]: I1209 12:06:57.069397 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:57 crc kubenswrapper[4703]: I1209 12:06:57.069444 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:57 crc kubenswrapper[4703]: I1209 12:06:57.069493 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:57 crc kubenswrapper[4703]: E1209 12:06:57.069538 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:57 crc kubenswrapper[4703]: I1209 12:06:57.069568 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:57 crc kubenswrapper[4703]: E1209 12:06:57.069631 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:06:57 crc kubenswrapper[4703]: E1209 12:06:57.069707 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:57 crc kubenswrapper[4703]: E1209 12:06:57.069780 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:57 crc kubenswrapper[4703]: I1209 12:06:57.070675 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:06:57 crc kubenswrapper[4703]: E1209 12:06:57.070856 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7hrm8_openshift-ovn-kubernetes(e9173444-5181-4ee4-b651-11d92ccab0d0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" Dec 09 12:06:59 crc kubenswrapper[4703]: I1209 12:06:59.069173 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:59 crc kubenswrapper[4703]: I1209 12:06:59.069323 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:59 crc kubenswrapper[4703]: E1209 12:06:59.069380 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:59 crc kubenswrapper[4703]: E1209 12:06:59.069504 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:59 crc kubenswrapper[4703]: I1209 12:06:59.069603 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:59 crc kubenswrapper[4703]: E1209 12:06:59.069674 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:59 crc kubenswrapper[4703]: I1209 12:06:59.069918 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:06:59 crc kubenswrapper[4703]: E1209 12:06:59.070024 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:01 crc kubenswrapper[4703]: I1209 12:07:01.068936 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:01 crc kubenswrapper[4703]: I1209 12:07:01.068986 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:01 crc kubenswrapper[4703]: I1209 12:07:01.068936 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:01 crc kubenswrapper[4703]: I1209 12:07:01.069840 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.069828 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.069937 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.069995 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.070093 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.086994 4703 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 09 12:07:01 crc kubenswrapper[4703]: E1209 12:07:01.166903 4703 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:07:03 crc kubenswrapper[4703]: I1209 12:07:03.069664 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:03 crc kubenswrapper[4703]: I1209 12:07:03.069708 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:03 crc kubenswrapper[4703]: E1209 12:07:03.070947 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:03 crc kubenswrapper[4703]: I1209 12:07:03.069869 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:03 crc kubenswrapper[4703]: I1209 12:07:03.069781 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:03 crc kubenswrapper[4703]: E1209 12:07:03.071102 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:03 crc kubenswrapper[4703]: E1209 12:07:03.071225 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:03 crc kubenswrapper[4703]: E1209 12:07:03.071331 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:05 crc kubenswrapper[4703]: I1209 12:07:05.069620 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:05 crc kubenswrapper[4703]: I1209 12:07:05.069665 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:05 crc kubenswrapper[4703]: I1209 12:07:05.069662 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:05 crc kubenswrapper[4703]: I1209 12:07:05.069620 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:05 crc kubenswrapper[4703]: E1209 12:07:05.069872 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:05 crc kubenswrapper[4703]: E1209 12:07:05.069786 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:05 crc kubenswrapper[4703]: E1209 12:07:05.070031 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:05 crc kubenswrapper[4703]: E1209 12:07:05.070107 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:06 crc kubenswrapper[4703]: E1209 12:07:06.167976 4703 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:07:07 crc kubenswrapper[4703]: I1209 12:07:07.069470 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:07 crc kubenswrapper[4703]: I1209 12:07:07.069483 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:07 crc kubenswrapper[4703]: I1209 12:07:07.069532 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:07 crc kubenswrapper[4703]: E1209 12:07:07.069702 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:07 crc kubenswrapper[4703]: E1209 12:07:07.069596 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:07 crc kubenswrapper[4703]: E1209 12:07:07.069774 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:07 crc kubenswrapper[4703]: I1209 12:07:07.069532 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:07 crc kubenswrapper[4703]: E1209 12:07:07.070071 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.069333 4703 scope.go:117] "RemoveContainer" containerID="71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.069670 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.618015 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/3.log" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.622955 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerStarted","Data":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.629339 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.631566 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/1.log" Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.631633 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerStarted","Data":"0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d"} Dec 09 12:07:08 crc kubenswrapper[4703]: I1209 12:07:08.662734 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podStartSLOduration=109.66272012 podStartE2EDuration="1m49.66272012s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:08.661935365 +0000 UTC m=+127.910698884" watchObservedRunningTime="2025-12-09 12:07:08.66272012 +0000 UTC m=+127.911483639" Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.069608 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.069774 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.069804 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:09 crc kubenswrapper[4703]: E1209 12:07:09.069864 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.070024 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:09 crc kubenswrapper[4703]: E1209 12:07:09.070067 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:09 crc kubenswrapper[4703]: E1209 12:07:09.070172 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:09 crc kubenswrapper[4703]: E1209 12:07:09.070286 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.236262 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pf4r7"] Dec 09 12:07:09 crc kubenswrapper[4703]: I1209 12:07:09.635462 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:09 crc kubenswrapper[4703]: E1209 12:07:09.636255 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:11 crc kubenswrapper[4703]: I1209 12:07:11.069292 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:11 crc kubenswrapper[4703]: I1209 12:07:11.069340 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:11 crc kubenswrapper[4703]: E1209 12:07:11.070545 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:11 crc kubenswrapper[4703]: I1209 12:07:11.070575 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:11 crc kubenswrapper[4703]: I1209 12:07:11.070603 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:11 crc kubenswrapper[4703]: E1209 12:07:11.070685 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:11 crc kubenswrapper[4703]: E1209 12:07:11.070757 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:11 crc kubenswrapper[4703]: E1209 12:07:11.070964 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:11 crc kubenswrapper[4703]: E1209 12:07:11.168385 4703 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:07:13 crc kubenswrapper[4703]: I1209 12:07:13.069046 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:13 crc kubenswrapper[4703]: I1209 12:07:13.069071 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:13 crc kubenswrapper[4703]: E1209 12:07:13.069154 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:13 crc kubenswrapper[4703]: I1209 12:07:13.069164 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:13 crc kubenswrapper[4703]: E1209 12:07:13.069267 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:13 crc kubenswrapper[4703]: E1209 12:07:13.069341 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:13 crc kubenswrapper[4703]: I1209 12:07:13.069388 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:13 crc kubenswrapper[4703]: E1209 12:07:13.069471 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:15 crc kubenswrapper[4703]: I1209 12:07:15.068865 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:15 crc kubenswrapper[4703]: I1209 12:07:15.068914 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:15 crc kubenswrapper[4703]: I1209 12:07:15.069006 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:15 crc kubenswrapper[4703]: E1209 12:07:15.068995 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:15 crc kubenswrapper[4703]: I1209 12:07:15.069080 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:15 crc kubenswrapper[4703]: E1209 12:07:15.069120 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:15 crc kubenswrapper[4703]: E1209 12:07:15.069219 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:15 crc kubenswrapper[4703]: E1209 12:07:15.069270 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pf4r7" podUID="9f199898-7916-48b6-b5e6-c878bacae384" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.068724 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.068741 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.068793 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.068913 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.073473 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.073617 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.073851 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.074787 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.075111 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 12:07:17 crc kubenswrapper[4703]: I1209 12:07:17.075165 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 12:07:18 crc kubenswrapper[4703]: I1209 12:07:18.583816 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.038320 4703 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.066895 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mdtkr"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.067672 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.069004 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.072506 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.075012 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.076172 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.076234 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.076648 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.076784 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.077039 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.077004 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.077781 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.086309 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.087277 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.088398 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097019 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097077 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-encryption-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097111 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit-dir\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097136 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbjx\" (UniqueName: \"kubernetes.io/projected/a58cf20a-3255-44f0-b38a-2017d8cce4e0-kube-api-access-9hbjx\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097161 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-config\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097180 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097243 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097268 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-client\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097290 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097303 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd2dc72-d636-4b53-94ee-f759caaf76e0-serving-cert\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097538 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvj9\" (UniqueName: \"kubernetes.io/projected/5cd2dc72-d636-4b53-94ee-f759caaf76e0-kube-api-access-tdvj9\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097575 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097691 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097715 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-image-import-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097756 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-node-pullsecrets\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097789 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-serving-cert\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097835 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-serving-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.097942 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.098424 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.098732 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.098940 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.099156 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.099462 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.098944 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.099409 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.099873 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.099476 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.100566 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.100964 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.101470 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.101491 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.101587 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.105143 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.105298 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hcmrp"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.105824 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cjdjc"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.105970 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hcmrp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.106105 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cjdjc"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.106644 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hqx9m"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.107037 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hqx9m"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.107101 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6hpw7"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.107974 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.108268 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.110392 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.110747 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbfz"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.111073 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.111332 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.111391 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.111856 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.111896 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.112941 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.113457 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.113736 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.114003 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.114863 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.120844 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.121225 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-czvkf"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.121616 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.121693 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.122178 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.122530 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.122875 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.123157 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w2z24"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.129363 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.129585 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.129757 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.130056 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.130256 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.130440 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.131902 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.132500 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.132862 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133168 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133124 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133316 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133541 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133564 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133622 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.133943 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.134851 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.135407 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.135844 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.130569 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.130868 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.137903 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.139393 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.139821 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.140022 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.140128 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.140575 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.144659 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.146624 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.150946 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.153278 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.176128 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rsr8p"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.176784 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.177458 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.177498 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.177631 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.177728 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.177918 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178084 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178296 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178442 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178645 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178835 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.178950 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.179063 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.179376 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.179537 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.179726 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.179988 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.181741 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.181760 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.181909 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182007 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182089 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182162 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182283 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182363 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182421 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182473 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182559 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182588 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182661 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182728 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182739 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182433 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182925 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.182997 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.183276 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.183346 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.183403 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.183420 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.184794 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.184984 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.185414 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.185794 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.185924 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189043 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189187 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189235 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189294 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189363 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189394 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189471 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189561 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189597 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189680 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189721 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189799 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189827 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189686 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189566 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.190150 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.190801 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.192224 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.192862 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.192900 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.193329 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.193493 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.189070 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.199262 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.200354 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.200517 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-images\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.200823 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-dir\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.200915 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-service-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.200942 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-machine-approver-tls\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.201089 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1193b2-a3d3-4480-8a79-a56b722e062e-config\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.201292 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.201451 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-image-import-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.201988 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202159 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-node-pullsecrets\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202225 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-node-pullsecrets\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202253 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-serving-cert\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202363 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-client\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202679 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kjb\" (UniqueName: \"kubernetes.io/projected/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-kube-api-access-w4kjb\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202877 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-w77wb"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.203548 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.202876 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-image-import-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204172 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99nfl\" (UniqueName: \"kubernetes.io/projected/d353402b-aa82-4c6a-aec3-4253503dfe34-kube-api-access-99nfl\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204221 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204456 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-config\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204504 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-serving-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204534 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5760fc2-3dd8-4966-a906-41bebd96de5d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204555 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-serving-cert\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204579 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzr6k\" (UniqueName: \"kubernetes.io/projected/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-kube-api-access-fzr6k\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204607 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-auth-proxy-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204630 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-config\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204655 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204679 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-encryption-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204704 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a1193b2-a3d3-4480-8a79-a56b722e062e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204734 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204750 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-serving-cert\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204768 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204782 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5760fc2-3dd8-4966-a906-41bebd96de5d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204797 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204810 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fx2\" (UniqueName: \"kubernetes.io/projected/b7d07c12-8633-4029-89e3-9298dd140444-kube-api-access-w8fx2\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204825 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwnd\" (UniqueName: \"kubernetes.io/projected/4491d82e-ecc9-4873-b1dd-412889079392-kube-api-access-bwwnd\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204839 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsgr4\" (UniqueName: \"kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204855 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit-dir\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204873 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d353402b-aa82-4c6a-aec3-4253503dfe34-serving-cert\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204889 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9492w\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-kube-api-access-9492w\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204903 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/558df921-8b36-45ba-8cc8-b25a1a2f6172-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204920 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbjx\" (UniqueName: \"kubernetes.io/projected/a58cf20a-3255-44f0-b38a-2017d8cce4e0-kube-api-access-9hbjx\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204934 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d353402b-aa82-4c6a-aec3-4253503dfe34-config\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204950 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4491d82e-ecc9-4873-b1dd-412889079392-metrics-tls\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204964 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdhh\" (UniqueName: \"kubernetes.io/projected/558df921-8b36-45ba-8cc8-b25a1a2f6172-kube-api-access-2rdhh\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204983 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1193b2-a3d3-4480-8a79-a56b722e062e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.204997 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205013 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-config\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205080 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205098 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205167 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a58cf20a-3255-44f0-b38a-2017d8cce4e0-audit-dir\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205350 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205771 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-serving-ca\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.205837 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-config\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.206295 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.206577 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.206945 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cd2dc72-d636-4b53-94ee-f759caaf76e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207241 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207301 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-encryption-config\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207326 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207345 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-etcd-client\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207366 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207383 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnfnb\" (UniqueName: \"kubernetes.io/projected/fd6b0396-1528-45bf-a713-67e7e20b3e96-kube-api-access-bnfnb\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207403 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-client\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207430 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd2dc72-d636-4b53-94ee-f759caaf76e0-serving-cert\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207448 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvj9\" (UniqueName: \"kubernetes.io/projected/5cd2dc72-d636-4b53-94ee-f759caaf76e0-kube-api-access-tdvj9\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207467 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207487 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-policies\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.207505 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.208400 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.208909 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-encryption-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.209094 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.209169 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-config\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.209272 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a58cf20a-3255-44f0-b38a-2017d8cce4e0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.227887 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.228001 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.229249 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-etcd-client\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.232341 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.233771 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.235220 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.236399 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cd2dc72-d636-4b53-94ee-f759caaf76e0-serving-cert\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.239602 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.247715 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.247984 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.248039 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.248576 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.248608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a58cf20a-3255-44f0-b38a-2017d8cce4e0-serving-cert\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.249532 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.266678 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.267584 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b4s68"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.267687 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.268350 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-b4s68"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.269258 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mdtkr"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.272084 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.272135 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.282756 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.282813 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.286275 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.286342 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hdphr"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.287527 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.287622 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.288005 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbfz"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.289382 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.290422 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.295705 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.299821 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6hpw7"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.301280 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.302762 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7phbd"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.303810 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7phbd"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.305288 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hqx9m"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.307933 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hcmrp"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308029 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"]
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308791 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-srv-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308825 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308858 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-policies\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308875 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308890 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a85bf1-837f-4175-803a-c09fde56e5d9-proxy-tls\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308905 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-images\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308923 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308949 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfbs\" (UniqueName: \"kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308971 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwd2g\" (UniqueName: \"kubernetes.io/projected/edf79140-82dd-4511-a921-d2a8de8635bf-kube-api-access-mwd2g\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.308990 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-service-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309005 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309022 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309041 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309057 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bjnj\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-kube-api-access-2bjnj\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb"
Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309072 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/de18af96-27c1-4d28-acfe-e0317de38dba-service-ca-bundle\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309094 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae21c58e-0b2b-450e-980d-c2d839fda11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309110 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ff55a4-f343-4795-9c6f-4ff56a52ea82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309127 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ff55a4-f343-4795-9c6f-4ff56a52ea82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309142 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f077d38-7967-4ef5-ba75-d833279fdd96-serving-cert\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309160 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-client\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309179 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pspm\" (UniqueName: \"kubernetes.io/projected/36981648-d6b7-4c08-96ec-622d069c4c19-kube-api-access-5pspm\") pod \"downloads-7954f5f757-hqx9m\" (UID: \"36981648-d6b7-4c08-96ec-622d069c4c19\") " pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309216 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309240 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4n8p\" (UniqueName: 
\"kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309270 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309289 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-apiservice-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309306 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-serving-cert\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309323 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzr6k\" (UniqueName: \"kubernetes.io/projected/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-kube-api-access-fzr6k\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309339 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-default-certificate\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309356 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6hs\" (UniqueName: \"kubernetes.io/projected/8f077d38-7967-4ef5-ba75-d833279fdd96-kube-api-access-4g6hs\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309373 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-images\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309390 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665kr\" (UniqueName: 
\"kubernetes.io/projected/b002710e-eb24-40e3-ba04-74fd233f4def-kube-api-access-665kr\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309410 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a1193b2-a3d3-4480-8a79-a56b722e062e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309428 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309447 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3c962d1-1735-4e31-9b41-246c2876d628-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309462 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ff55a4-f343-4795-9c6f-4ff56a52ea82-config\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309503 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8x9r\" (UniqueName: \"kubernetes.io/projected/c4a6920d-8199-478b-bc08-a96dbc58d236-kube-api-access-j8x9r\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309522 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-serving-cert\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309537 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309553 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309574 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5760fc2-3dd8-4966-a906-41bebd96de5d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309590 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309608 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fx2\" (UniqueName: \"kubernetes.io/projected/b7d07c12-8633-4029-89e3-9298dd140444-kube-api-access-w8fx2\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309627 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwnd\" (UniqueName: \"kubernetes.io/projected/4491d82e-ecc9-4873-b1dd-412889079392-kube-api-access-bwwnd\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309642 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsgr4\" (UniqueName: \"kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309659 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309676 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d353402b-aa82-4c6a-aec3-4253503dfe34-serving-cert\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309691 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d353402b-aa82-4c6a-aec3-4253503dfe34-config\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309709 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9492w\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-kube-api-access-9492w\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309724 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309741 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b002710e-eb24-40e3-ba04-74fd233f4def-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309759 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6920d-8199-478b-bc08-a96dbc58d236-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309775 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf79140-82dd-4511-a921-d2a8de8635bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309792 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309809 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqr8\" (UniqueName: \"kubernetes.io/projected/68775d5a-a682-4965-ac8a-74216e8471fb-kube-api-access-pcqr8\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309826 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkn9k\" (UniqueName: \"kubernetes.io/projected/ad543716-1bbb-4a16-9670-084660959961-kube-api-access-tkn9k\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309844 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qts\" (UniqueName: \"kubernetes.io/projected/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-kube-api-access-h7qts\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309862 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68775d5a-a682-4965-ac8a-74216e8471fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309879 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-encryption-config\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309894 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309914 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-etcd-client\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309938 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309958 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnfnb\" (UniqueName: \"kubernetes.io/projected/fd6b0396-1528-45bf-a713-67e7e20b3e96-kube-api-access-bnfnb\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309976 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.309993 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310014 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzx62\" (UniqueName: \"kubernetes.io/projected/66e2c60b-b67f-4256-b15b-d987c05f3ea8-kube-api-access-jzx62\") pod \"migrator-59844c95c7-82x8s\" (UID: \"66e2c60b-b67f-4256-b15b-d987c05f3ea8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310047 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310071 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae21c58e-0b2b-450e-980d-c2d839fda11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310101 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310119 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1193b2-a3d3-4480-8a79-a56b722e062e-config\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310136 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-dir\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310152 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-machine-approver-tls\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.310171 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-stats-auth\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.311049 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.312935 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313413 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95fba6-ebed-4f09-be10-b8d67bb51752-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313455 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kjb\" (UniqueName: \"kubernetes.io/projected/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-kube-api-access-w4kjb\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313509 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpsz\" (UniqueName: \"kubernetes.io/projected/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-kube-api-access-jtpsz\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313533 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-metrics-certs\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313649 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42f7caf7-af08-4406-b189-3e4ce5fa6819-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313686 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99nfl\" (UniqueName: \"kubernetes.io/projected/d353402b-aa82-4c6a-aec3-4253503dfe34-kube-api-access-99nfl\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313710 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313733 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313756 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c4a6920d-8199-478b-bc08-a96dbc58d236-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313781 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-config\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313803 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5760fc2-3dd8-4966-a906-41bebd96de5d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313828 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-auth-proxy-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313851 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-config\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc 
kubenswrapper[4703]: I1209 12:07:24.313876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68775d5a-a682-4965-ac8a-74216e8471fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313903 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313929 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf95fba6-ebed-4f09-be10-b8d67bb51752-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313952 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.313978 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314002 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5qx\" (UniqueName: \"kubernetes.io/projected/f3c962d1-1735-4e31-9b41-246c2876d628-kube-api-access-wx5qx\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314027 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-config\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314057 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/558df921-8b36-45ba-8cc8-b25a1a2f6172-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314080 4703 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b4s68"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314081 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztdz\" (UniqueName: \"kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314137 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98t5l\" (UniqueName: \"kubernetes.io/projected/f0a85bf1-837f-4175-803a-c09fde56e5d9-kube-api-access-98t5l\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314165 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314689 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4491d82e-ecc9-4873-b1dd-412889079392-metrics-tls\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314735 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdhh\" (UniqueName: \"kubernetes.io/projected/558df921-8b36-45ba-8cc8-b25a1a2f6172-kube-api-access-2rdhh\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314760 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad543716-1bbb-4a16-9670-084660959961-tmpfs\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314780 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf95fba6-ebed-4f09-be10-b8d67bb51752-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314796 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314813 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314832 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1193b2-a3d3-4480-8a79-a56b722e062e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314848 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314865 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314881 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314924 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314951 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfkgm\" (UniqueName: \"kubernetes.io/projected/42f7caf7-af08-4406-b189-3e4ce5fa6819-kube-api-access-xfkgm\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314969 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.314990 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2rq\" (UniqueName: \"kubernetes.io/projected/de18af96-27c1-4d28-acfe-e0317de38dba-kube-api-access-jr2rq\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.315007 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-trusted-ca\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.315025 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-webhook-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.315044 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf79140-82dd-4511-a921-d2a8de8635bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.316394 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.317210 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.317478 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-dir\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.317757 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.319070 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a1193b2-a3d3-4480-8a79-a56b722e062e-config\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: 
\"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.320293 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-auth-proxy-config\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.320910 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.321520 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.321588 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.322073 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-machine-approver-tls\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.322748 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d353402b-aa82-4c6a-aec3-4253503dfe34-config\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.323323 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.323421 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a1193b2-a3d3-4480-8a79-a56b722e062e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.325280 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.325645 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.327008 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.327703 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.328133 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d353402b-aa82-4c6a-aec3-4253503dfe34-serving-cert\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.328887 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.330102 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.330174 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.332322 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rsr8p"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.332414 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hdphr"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.332941 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.334670 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.334690 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-client\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.336403 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7phbd"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.338190 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w77wb"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.340610 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-serving-cert\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.340915 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-czvkf"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.342174 4703 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.343682 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.344829 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.345924 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w2z24"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.347163 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.348365 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.349281 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.352255 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.357797 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-428j9"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.359794 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.361389 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-trptt"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.366180 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd6b0396-1528-45bf-a713-67e7e20b3e96-encryption-config\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.369215 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.369250 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-trptt"] Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.369519 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.370826 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.389594 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.409940 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.412696 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-audit-policies\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.415590 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.415719 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de18af96-27c1-4d28-acfe-e0317de38dba-service-ca-bundle\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.415835 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bjnj\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-kube-api-access-2bjnj\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.415954 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae21c58e-0b2b-450e-980d-c2d839fda11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.416102 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ff55a4-f343-4795-9c6f-4ff56a52ea82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.416249 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ff55a4-f343-4795-9c6f-4ff56a52ea82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.416382 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f077d38-7967-4ef5-ba75-d833279fdd96-serving-cert\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.416521 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4n8p\" (UniqueName: \"kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.416854 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pspm\" (UniqueName: \"kubernetes.io/projected/36981648-d6b7-4c08-96ec-622d069c4c19-kube-api-access-5pspm\") pod \"downloads-7954f5f757-hqx9m\" (UID: \"36981648-d6b7-4c08-96ec-622d069c4c19\") " pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.417096 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.417495 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.417645 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-apiservice-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.417779 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-default-certificate\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.418337 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6hs\" (UniqueName: \"kubernetes.io/projected/8f077d38-7967-4ef5-ba75-d833279fdd96-kube-api-access-4g6hs\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc 
kubenswrapper[4703]: I1209 12:07:24.418517 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-images\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.418628 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665kr\" (UniqueName: \"kubernetes.io/projected/b002710e-eb24-40e3-ba04-74fd233f4def-kube-api-access-665kr\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.418784 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.418935 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3c962d1-1735-4e31-9b41-246c2876d628-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.419094 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8x9r\" (UniqueName: \"kubernetes.io/projected/c4a6920d-8199-478b-bc08-a96dbc58d236-kube-api-access-j8x9r\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.419276 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ff55a4-f343-4795-9c6f-4ff56a52ea82-config\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.419465 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-images\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.419123 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.417491 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de18af96-27c1-4d28-acfe-e0317de38dba-service-ca-bundle\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.420517 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ff55a4-f343-4795-9c6f-4ff56a52ea82-config\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.420810 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.420959 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421109 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421324 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421561 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6920d-8199-478b-bc08-a96dbc58d236-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421668 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf79140-82dd-4511-a921-d2a8de8635bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421768 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-apiservice-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421741 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421575 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3c962d1-1735-4e31-9b41-246c2876d628-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421153 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f077d38-7967-4ef5-ba75-d833279fdd96-serving-cert\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421898 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.421782 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b002710e-eb24-40e3-ba04-74fd233f4def-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.422172 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.422278 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcqr8\" (UniqueName: \"kubernetes.io/projected/68775d5a-a682-4965-ac8a-74216e8471fb-kube-api-access-pcqr8\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.422375 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkn9k\" (UniqueName: \"kubernetes.io/projected/ad543716-1bbb-4a16-9670-084660959961-kube-api-access-tkn9k\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: 
\"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.422979 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qts\" (UniqueName: \"kubernetes.io/projected/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-kube-api-access-h7qts\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.424439 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68775d5a-a682-4965-ac8a-74216e8471fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.424621 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a6920d-8199-478b-bc08-a96dbc58d236-serving-cert\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.424627 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.422911 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80ff55a4-f343-4795-9c6f-4ff56a52ea82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.423975 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.424112 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edf79140-82dd-4511-a921-d2a8de8635bf-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.424688 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: 
\"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425179 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425239 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzx62\" (UniqueName: \"kubernetes.io/projected/66e2c60b-b67f-4256-b15b-d987c05f3ea8-kube-api-access-jzx62\") pod \"migrator-59844c95c7-82x8s\" (UID: \"66e2c60b-b67f-4256-b15b-d987c05f3ea8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425263 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425299 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae21c58e-0b2b-450e-980d-c2d839fda11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425319 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425357 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-stats-auth\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425397 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95fba6-ebed-4f09-be10-b8d67bb51752-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425418 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42f7caf7-af08-4406-b189-3e4ce5fa6819-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425449 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpsz\" (UniqueName: \"kubernetes.io/projected/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-kube-api-access-jtpsz\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425469 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-metrics-certs\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425494 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425514 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c4a6920d-8199-478b-bc08-a96dbc58d236-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425560 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68775d5a-a682-4965-ac8a-74216e8471fb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425582 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf95fba6-ebed-4f09-be10-b8d67bb51752-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425601 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425628 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5qx\" (UniqueName: \"kubernetes.io/projected/f3c962d1-1735-4e31-9b41-246c2876d628-kube-api-access-wx5qx\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 
12:07:24.425656 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-config\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425692 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztdz\" (UniqueName: \"kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425717 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t5l\" (UniqueName: \"kubernetes.io/projected/f0a85bf1-837f-4175-803a-c09fde56e5d9-kube-api-access-98t5l\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425779 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad543716-1bbb-4a16-9670-084660959961-tmpfs\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425793 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425802 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf95fba6-ebed-4f09-be10-b8d67bb51752-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425844 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425872 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425908 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425958 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfkgm\" (UniqueName: \"kubernetes.io/projected/42f7caf7-af08-4406-b189-3e4ce5fa6819-kube-api-access-xfkgm\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.425977 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-webhook-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426003 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426026 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2rq\" (UniqueName: \"kubernetes.io/projected/de18af96-27c1-4d28-acfe-e0317de38dba-kube-api-access-jr2rq\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426046 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-trusted-ca\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426066 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf79140-82dd-4511-a921-d2a8de8635bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426082 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: 
\"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426127 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426089 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-srv-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426352 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426412 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a85bf1-837f-4175-803a-c09fde56e5d9-proxy-tls\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426445 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwd2g\" (UniqueName: \"kubernetes.io/projected/edf79140-82dd-4511-a921-d2a8de8635bf-kube-api-access-mwd2g\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426476 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426497 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwfbs\" (UniqueName: \"kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426515 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.426515 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.427601 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-default-certificate\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.427998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.428096 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.428104 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-config\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.428216 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf95fba6-ebed-4f09-be10-b8d67bb51752-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.428667 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.428767 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edf79140-82dd-4511-a921-d2a8de8635bf-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.429401 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f077d38-7967-4ef5-ba75-d833279fdd96-trusted-ca\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.429442 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a85bf1-837f-4175-803a-c09fde56e5d9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.429597 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad543716-1bbb-4a16-9670-084660959961-tmpfs\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.429614 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c4a6920d-8199-478b-bc08-a96dbc58d236-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.430349 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.430455 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-stats-auth\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.430560 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.430748 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-profile-collector-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.431361 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.431453 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.431551 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de18af96-27c1-4d28-acfe-e0317de38dba-metrics-certs\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.432013 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.432138 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.432338 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf95fba6-ebed-4f09-be10-b8d67bb51752-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.432798 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.432816 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad543716-1bbb-4a16-9670-084660959961-webhook-cert\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.433147 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.433931 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a85bf1-837f-4175-803a-c09fde56e5d9-proxy-tls\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.434555 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.434902 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-srv-cert\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.438052 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.456130 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.460280 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5760fc2-3dd8-4966-a906-41bebd96de5d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.470418 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.490291 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.509758 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.518573 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-serving-cert\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.530928 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.541145 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d07c12-8633-4029-89e3-9298dd140444-etcd-client\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.550566 4703 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.554584 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-etcd-service-ca\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.570104 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.590610 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.610074 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.611911 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d07c12-8633-4029-89e3-9298dd140444-config\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.629975 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.649680 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.659339 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5760fc2-3dd8-4966-a906-41bebd96de5d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.670435 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.678784 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68775d5a-a682-4965-ac8a-74216e8471fb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.690975 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.710046 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.719418 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68775d5a-a682-4965-ac8a-74216e8471fb-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.730222 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.749898 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.769739 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.772036 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-config\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.789840 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.794168 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/558df921-8b36-45ba-8cc8-b25a1a2f6172-images\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.809887 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.830411 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.833250 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/558df921-8b36-45ba-8cc8-b25a1a2f6172-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.850254 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.869390 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.878799 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd6b0396-1528-45bf-a713-67e7e20b3e96-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.889728 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.910069 4703 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.929224 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.935128 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b002710e-eb24-40e3-ba04-74fd233f4def-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.950689 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.969264 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 12:07:24 crc kubenswrapper[4703]: I1209 12:07:24.989976 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.009504 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.030239 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.050486 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.070319 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.078145 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae21c58e-0b2b-450e-980d-c2d839fda11b-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.089500 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.116296 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.118357 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae21c58e-0b2b-450e-980d-c2d839fda11b-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.130039 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.150180 4703 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.170701 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.181940 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4491d82e-ecc9-4873-b1dd-412889079392-metrics-tls\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.188281 4703 request.go:700] Waited for 1.007372073s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.189865 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.230453 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.250832 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.269749 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.282919 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.290015 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.298528 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.310632 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: E1209 12:07:25.321011 4703 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 09 12:07:25 crc kubenswrapper[4703]: E1209 12:07:25.321087 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls 
podName:8a37abfc-17ea-4aac-bbb9-9650cb15a2f6 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:25.821064106 +0000 UTC m=+145.069827615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls") pod "machine-config-controller-84d6567774-psc6r" (UID: "8a37abfc-17ea-4aac-bbb9-9650cb15a2f6") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.330959 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.339110 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42f7caf7-af08-4406-b189-3e4ce5fa6819-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.349821 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.370319 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.390449 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.409958 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.430596 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.449642 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.469909 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.489804 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.510254 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.529540 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.554897 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.584368 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbjx\" (UniqueName: \"kubernetes.io/projected/a58cf20a-3255-44f0-b38a-2017d8cce4e0-kube-api-access-9hbjx\") pod \"apiserver-76f77b778f-mdtkr\" (UID: \"a58cf20a-3255-44f0-b38a-2017d8cce4e0\") " 
pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.595257 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.599075 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.610945 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.630181 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.650263 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.669449 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.709754 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvj9\" (UniqueName: \"kubernetes.io/projected/5cd2dc72-d636-4b53-94ee-f759caaf76e0-kube-api-access-tdvj9\") pod \"authentication-operator-69f744f599-8cpwb\" (UID: \"5cd2dc72-d636-4b53-94ee-f759caaf76e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.711654 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.729473 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.749858 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.770074 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.775914 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mdtkr"] Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.789268 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.810097 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.829989 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.845332 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.849507 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-proxy-tls\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.849743 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.869594 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.889672 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.910177 4703 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.923737 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.929972 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.950392 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.970638 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 12:07:25 crc kubenswrapper[4703]: I1209 12:07:25.989930 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.045582 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.065028 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fx2\" (UniqueName: \"kubernetes.io/projected/b7d07c12-8633-4029-89e3-9298dd140444-kube-api-access-w8fx2\") pod \"etcd-operator-b45778765-w2z24\" (UID: \"b7d07c12-8633-4029-89e3-9298dd140444\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.090624 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwnd\" (UniqueName: \"kubernetes.io/projected/4491d82e-ecc9-4873-b1dd-412889079392-kube-api-access-bwwnd\") pod \"dns-operator-744455d44c-rsr8p\" (UID: \"4491d82e-ecc9-4873-b1dd-412889079392\") " pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.104646 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8cpwb"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.108821 4703 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fzr6k\" (UniqueName: \"kubernetes.io/projected/2b57ba45-1dc5-4f41-93f1-94c5d07cebaa-kube-api-access-fzr6k\") pod \"machine-approver-56656f9798-zg8pp\" (UID: \"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:26 crc kubenswrapper[4703]: W1209 12:07:26.112883 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd2dc72_d636_4b53_94ee_f759caaf76e0.slice/crio-7a3e8edfac19dd6f20999a4a7b76cc6f44092ed5d6fb8b882b6cf2c5985376e9 WatchSource:0}: Error finding container 7a3e8edfac19dd6f20999a4a7b76cc6f44092ed5d6fb8b882b6cf2c5985376e9: Status 404 returned error can't find the container with id 7a3e8edfac19dd6f20999a4a7b76cc6f44092ed5d6fb8b882b6cf2c5985376e9 Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.122951 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a1193b2-a3d3-4480-8a79-a56b722e062e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vqd7s\" (UID: \"7a1193b2-a3d3-4480-8a79-a56b722e062e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.146454 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdhh\" (UniqueName: \"kubernetes.io/projected/558df921-8b36-45ba-8cc8-b25a1a2f6172-kube-api-access-2rdhh\") pod \"machine-api-operator-5694c8668f-czvkf\" (UID: \"558df921-8b36-45ba-8cc8-b25a1a2f6172\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.163149 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnfnb\" (UniqueName: \"kubernetes.io/projected/fd6b0396-1528-45bf-a713-67e7e20b3e96-kube-api-access-bnfnb\") pod \"apiserver-7bbb656c7d-fh878\" (UID: \"fd6b0396-1528-45bf-a713-67e7e20b3e96\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.179543 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.183355 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9492w\" (UniqueName: \"kubernetes.io/projected/b5760fc2-3dd8-4966-a906-41bebd96de5d-kube-api-access-9492w\") pod \"cluster-image-registry-operator-dc59b4c8b-92q4r\" (UID: \"b5760fc2-3dd8-4966-a906-41bebd96de5d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.188977 4703 request.go:700] Waited for 1.8686526s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.207327 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsgr4\" (UniqueName: \"kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4\") pod \"route-controller-manager-6576b87f9c-2zgjr\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.228892 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kjb\" (UniqueName: \"kubernetes.io/projected/8a37abfc-17ea-4aac-bbb9-9650cb15a2f6-kube-api-access-w4kjb\") pod \"machine-config-controller-84d6567774-psc6r\" (UID: \"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.246381 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99nfl\" (UniqueName: \"kubernetes.io/projected/d353402b-aa82-4c6a-aec3-4253503dfe34-kube-api-access-99nfl\") pod \"service-ca-operator-777779d784-mktpn\" (UID: \"d353402b-aa82-4c6a-aec3-4253503dfe34\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.251588 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.254535 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.257477 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.267495 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.269842 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.276355 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.290122 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.294263 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.311450 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.330454 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.333552 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.353699 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.356411 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.369861 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.370955 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.371665 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.415738 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bjnj\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-kube-api-access-2bjnj\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:26 crc kubenswrapper[4703]: W1209 12:07:26.431661 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b57ba45_1dc5_4f41_93f1_94c5d07cebaa.slice/crio-7ced9c3c22b5ea1c745ee233c9b7de74c91d9005b0432cfb7c34668974f63ee6 WatchSource:0}: Error finding container 7ced9c3c22b5ea1c745ee233c9b7de74c91d9005b0432cfb7c34668974f63ee6: Status 404 returned error can't find the container with id 7ced9c3c22b5ea1c745ee233c9b7de74c91d9005b0432cfb7c34668974f63ee6 Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.432306 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ff55a4-f343-4795-9c6f-4ff56a52ea82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llbbl\" (UID: \"80ff55a4-f343-4795-9c6f-4ff56a52ea82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.449824 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4n8p\" (UniqueName: \"kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p\") pod \"controller-manager-879f6c89f-2xlcr\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.468562 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pspm\" (UniqueName: \"kubernetes.io/projected/36981648-d6b7-4c08-96ec-622d069c4c19-kube-api-access-5pspm\") pod \"downloads-7954f5f757-hqx9m\" (UID: \"36981648-d6b7-4c08-96ec-622d069c4c19\") " pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.499543 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6hs\" (UniqueName: \"kubernetes.io/projected/8f077d38-7967-4ef5-ba75-d833279fdd96-kube-api-access-4g6hs\") pod \"console-operator-58897d9998-hcmrp\" (UID: \"8f077d38-7967-4ef5-ba75-d833279fdd96\") " pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.507587 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665kr\" (UniqueName: \"kubernetes.io/projected/b002710e-eb24-40e3-ba04-74fd233f4def-kube-api-access-665kr\") pod \"cluster-samples-operator-665b6dd947-vv85r\" (UID: \"b002710e-eb24-40e3-ba04-74fd233f4def\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.512778 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.533827 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8x9r\" (UniqueName: \"kubernetes.io/projected/c4a6920d-8199-478b-bc08-a96dbc58d236-kube-api-access-j8x9r\") pod \"openshift-config-operator-7777fb866f-5tsxq\" (UID: \"c4a6920d-8199-478b-bc08-a96dbc58d236\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.544392 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.547171 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae21c58e-0b2b-450e-980d-c2d839fda11b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzgkb\" (UID: \"ae21c58e-0b2b-450e-980d-c2d839fda11b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.566717 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcqr8\" (UniqueName: \"kubernetes.io/projected/68775d5a-a682-4965-ac8a-74216e8471fb-kube-api-access-pcqr8\") pod \"openshift-apiserver-operator-796bbdcf4f-dh6tp\" (UID: \"68775d5a-a682-4965-ac8a-74216e8471fb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.587537 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.592855 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkn9k\" (UniqueName: \"kubernetes.io/projected/ad543716-1bbb-4a16-9670-084660959961-kube-api-access-tkn9k\") pod \"packageserver-d55dfcdfc-ng8gw\" (UID: \"ad543716-1bbb-4a16-9670-084660959961\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.602810 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qts\" (UniqueName: \"kubernetes.io/projected/1fc394d2-c901-4c55-b5e4-c4e2f889cb82-kube-api-access-h7qts\") pod \"kube-storage-version-migrator-operator-b67b599dd-8szns\" (UID: \"1fc394d2-c901-4c55-b5e4-c4e2f889cb82\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.603057 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.620652 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.633058 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzx62\" (UniqueName: \"kubernetes.io/projected/66e2c60b-b67f-4256-b15b-d987c05f3ea8-kube-api-access-jzx62\") pod \"migrator-59844c95c7-82x8s\" (UID: \"66e2c60b-b67f-4256-b15b-d987c05f3ea8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.636801 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.643698 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.647432 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztdz\" (UniqueName: \"kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz\") pod \"collect-profiles-29421360-wvnqz\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.657495 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.674481 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5qx\" (UniqueName: \"kubernetes.io/projected/f3c962d1-1735-4e31-9b41-246c2876d628-kube-api-access-wx5qx\") pod \"multus-admission-controller-857f4d67dd-rpbfz\" (UID: \"f3c962d1-1735-4e31-9b41-246c2876d628\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.694667 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpsz\" (UniqueName: \"kubernetes.io/projected/c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa-kube-api-access-jtpsz\") pod \"catalog-operator-68c6474976-tf7x5\" (UID: \"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.707074 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" event={"ID":"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa","Type":"ContainerStarted","Data":"7ced9c3c22b5ea1c745ee233c9b7de74c91d9005b0432cfb7c34668974f63ee6"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.708075 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" event={"ID":"7a1193b2-a3d3-4480-8a79-a56b722e062e","Type":"ContainerStarted","Data":"7b73d86fc1d4591e46402ce7375c4f53bc34fc9b62ec397316d4f1197b1677bb"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.710608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr2rq\" (UniqueName: \"kubernetes.io/projected/de18af96-27c1-4d28-acfe-e0317de38dba-kube-api-access-jr2rq\") pod \"router-default-5444994796-cjdjc\" (UID: \"de18af96-27c1-4d28-acfe-e0317de38dba\") " 
pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.711887 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.722052 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.724080 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.730454 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf95fba6-ebed-4f09-be10-b8d67bb51752-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rsr7z\" (UID: \"bf95fba6-ebed-4f09-be10-b8d67bb51752\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.748625 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.749191 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.757534 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.759982 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rsr8p"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.767985 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" event={"ID":"5cd2dc72-d636-4b53-94ee-f759caaf76e0","Type":"ContainerStarted","Data":"58fb5c5c43c579a711a77c9fd4a28e484ad69c9039f73a818f4675067e6da4f2"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.768031 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" event={"ID":"5cd2dc72-d636-4b53-94ee-f759caaf76e0","Type":"ContainerStarted","Data":"7a3e8edfac19dd6f20999a4a7b76cc6f44092ed5d6fb8b882b6cf2c5985376e9"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.770551 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.776519 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t5l\" (UniqueName: \"kubernetes.io/projected/f0a85bf1-837f-4175-803a-c09fde56e5d9-kube-api-access-98t5l\") pod \"machine-config-operator-74547568cd-dsplm\" (UID: \"f0a85bf1-837f-4175-803a-c09fde56e5d9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.783885 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwfbs\" (UniqueName: \"kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs\") pod \"oauth-openshift-558db77b4-6hpw7\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.787285 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.791995 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfkgm\" (UniqueName: \"kubernetes.io/projected/42f7caf7-af08-4406-b189-3e4ce5fa6819-kube-api-access-xfkgm\") pod \"control-plane-machine-set-operator-78cbb6b69f-6cpwj\" (UID: \"42f7caf7-af08-4406-b189-3e4ce5fa6819\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.793775 4703 generic.go:334] "Generic (PLEG): container finished" podID="a58cf20a-3255-44f0-b38a-2017d8cce4e0" containerID="de451b2a84dd06178570e438c6ba74553a58a18a6bf5ad0fa03a6f9d011bf664" exitCode=0 Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.794367 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-czvkf"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.794419 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" event={"ID":"a58cf20a-3255-44f0-b38a-2017d8cce4e0","Type":"ContainerDied","Data":"de451b2a84dd06178570e438c6ba74553a58a18a6bf5ad0fa03a6f9d011bf664"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.794444 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.794454 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" event={"ID":"a58cf20a-3255-44f0-b38a-2017d8cce4e0","Type":"ContainerStarted","Data":"36daf268c76938bcf3ace31e0c87ed0fe10c96534e14ebbbfcdff9527e8902f8"} Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.797395 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.801263 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w2z24"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.807417 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.808449 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwd2g\" (UniqueName: \"kubernetes.io/projected/edf79140-82dd-4511-a921-d2a8de8635bf-kube-api-access-mwd2g\") pod \"openshift-controller-manager-operator-756b6f6bc6-9d4qd\" (UID: \"edf79140-82dd-4511-a921-d2a8de8635bf\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.826517 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.835167 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.850233 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.857744 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mktpn"] Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.861895 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.861926 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-csi-data-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.861962 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.861989 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862049 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" 
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862099 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-srv-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862131 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862167 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-mountpoint-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862189 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-registration-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862224 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862265 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862329 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862347 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-key\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862395 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862426 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-plugins-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862456 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862509 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862584 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-cabundle\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862632 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm7kn\" (UniqueName: \"kubernetes.io/projected/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-kube-api-access-hm7kn\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862683 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snfpv\" (UniqueName: \"kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862734 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862771 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcc8\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862786 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqklm\" (UniqueName: \"kubernetes.io/projected/f22fe25c-091e-4ed7-b58c-092e4c75df3e-kube-api-access-xqklm\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862800 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-metrics-tls\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862841 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5qkw\" (UniqueName: \"kubernetes.io/projected/3dfa5db2-7657-47b6-804c-878fd45694e5-kube-api-access-t5qkw\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862861 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-socket-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.862998 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863021 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67ddz\" (UniqueName: \"kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863109 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-config-volume\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863182 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863220 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863271 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa5db2-7657-47b6-804c-878fd45694e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863363 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863655 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bm5\" (UniqueName: \"kubernetes.io/projected/e2de5e19-e8f3-4721-96c3-943bf64a7dab-kube-api-access-f5bm5\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.863824 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvpw\" (UniqueName: \"kubernetes.io/projected/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-kube-api-access-grvpw\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd"
Dec 09 12:07:26 crc kubenswrapper[4703]: E1209 12:07:26.869837 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.369817108 +0000 UTC m=+146.618580707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:26 crc kubenswrapper[4703]: W1209 12:07:26.875681 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5760fc2_3dd8_4966_a906_41bebd96de5d.slice/crio-f39a1c6dea5904996f518ca849685b2b03eb698f1d2cc03c8592f0d595ae9e54 WatchSource:0}: Error finding container f39a1c6dea5904996f518ca849685b2b03eb698f1d2cc03c8592f0d595ae9e54: Status 404 returned error can't find the container with id f39a1c6dea5904996f518ca849685b2b03eb698f1d2cc03c8592f0d595ae9e54
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.894611 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.911138 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.912256 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq"]
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.950086 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.965753 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.965943 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5qkw\" (UniqueName: \"kubernetes.io/projected/3dfa5db2-7657-47b6-804c-878fd45694e5-kube-api-access-t5qkw\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.965973 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-socket-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr"
Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966015 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67ddz\" (UniqueName: \"kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh"
Dec 09 12:07:26 crc
kubenswrapper[4703]: I1209 12:07:26.966041 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966059 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-config-volume\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966081 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966099 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa5db2-7657-47b6-804c-878fd45694e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966120 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966156 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdc5x\" (UniqueName: \"kubernetes.io/projected/51f058fc-be40-4ed7-a028-eeced6eec26a-kube-api-access-fdc5x\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966179 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bm5\" (UniqueName: \"kubernetes.io/projected/e2de5e19-e8f3-4721-96c3-943bf64a7dab-kube-api-access-f5bm5\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966218 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvpw\" (UniqueName: \"kubernetes.io/projected/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-kube-api-access-grvpw\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966247 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-node-bootstrap-token\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966270 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966285 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-csi-data-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966300 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bvz9\" (UniqueName: \"kubernetes.io/projected/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-kube-api-access-5bvz9\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966318 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966341 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966367 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966387 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-srv-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966401 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-certs\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:26 crc 
kubenswrapper[4703]: I1209 12:07:26.966418 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966435 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-mountpoint-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966449 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-registration-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966465 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966481 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966500 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966516 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-key\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966541 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966556 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-plugins-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 
12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966570 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966611 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966641 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51f058fc-be40-4ed7-a028-eeced6eec26a-cert\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966656 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-cabundle\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966672 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm7kn\" (UniqueName: \"kubernetes.io/projected/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-kube-api-access-hm7kn\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966688 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snfpv\" (UniqueName: \"kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966705 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966720 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcc8\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966738 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqklm\" (UniqueName: \"kubernetes.io/projected/f22fe25c-091e-4ed7-b58c-092e4c75df3e-kube-api-access-xqklm\") pod \"service-ca-9c57cc56f-b4s68\" (UID: 
\"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.966755 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-metrics-tls\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.968484 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:26 crc kubenswrapper[4703]: E1209 12:07:26.968697 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.468682891 +0000 UTC m=+146.717446420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.968933 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-socket-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.969101 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-plugins-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.971125 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.976935 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.981878 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.972632 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-cabundle\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987060 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987476 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-mountpoint-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987481 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987564 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-registration-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987627 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2de5e19-e8f3-4721-96c3-943bf64a7dab-csi-data-dir\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.987682 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-srv-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.988223 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.988279 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates\") pod \"image-registry-697d97f7c8-q57qh\" (UID: 
\"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.988524 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.989126 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.989248 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-metrics-tls\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.991870 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.992475 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3dfa5db2-7657-47b6-804c-878fd45694e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.993437 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f22fe25c-091e-4ed7-b58c-092e4c75df3e-signing-key\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:26 crc kubenswrapper[4703]: I1209 12:07:26.992496 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.006783 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-config-volume\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.007149 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert\") pod 
\"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.008269 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5qkw\" (UniqueName: \"kubernetes.io/projected/3dfa5db2-7657-47b6-804c-878fd45694e5-kube-api-access-t5qkw\") pod \"package-server-manager-789f6589d5-knw7d\" (UID: \"3dfa5db2-7657-47b6-804c-878fd45694e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.015446 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.045146 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67ddz\" (UniqueName: \"kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz\") pod \"marketplace-operator-79b997595-vlkrh\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.067653 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvpw\" (UniqueName: \"kubernetes.io/projected/8e44c4b9-79cf-44c7-ad73-6aaeac691e86-kube-api-access-grvpw\") pod \"dns-default-7phbd\" (UID: \"8e44c4b9-79cf-44c7-ad73-6aaeac691e86\") " pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.072411 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51f058fc-be40-4ed7-a028-eeced6eec26a-cert\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.072506 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075076 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075172 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdc5x\" (UniqueName: \"kubernetes.io/projected/51f058fc-be40-4ed7-a028-eeced6eec26a-kube-api-access-fdc5x\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075275 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075307 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-node-bootstrap-token\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075336 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075361 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bvz9\" (UniqueName: \"kubernetes.io/projected/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-kube-api-access-5bvz9\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075669 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.075713 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-certs\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.075938 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.575920426 +0000 UTC m=+146.824683935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.078386 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51f058fc-be40-4ed7-a028-eeced6eec26a-cert\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.079058 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.081538 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.084804 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-certs\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.085420 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.087102 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-node-bootstrap-token\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.088058 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.093544 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snfpv\" (UniqueName: 
\"kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv\") pod \"console-f9d7485db-w77wb\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.115336 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm7kn\" (UniqueName: \"kubernetes.io/projected/8b1bfa15-e14a-489c-9eb9-f9774dfca6b4-kube-api-access-hm7kn\") pod \"olm-operator-6b444d44fb-99nm5\" (UID: \"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.141818 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcc8\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.155492 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bm5\" (UniqueName: \"kubernetes.io/projected/e2de5e19-e8f3-4721-96c3-943bf64a7dab-kube-api-access-f5bm5\") pod \"csi-hostpathplugin-hdphr\" (UID: \"e2de5e19-e8f3-4721-96c3-943bf64a7dab\") " pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.183689 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.184637 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.684614754 +0000 UTC m=+146.933378273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.185954 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqklm\" (UniqueName: \"kubernetes.io/projected/f22fe25c-091e-4ed7-b58c-092e4c75df3e-kube-api-access-xqklm\") pod \"service-ca-9c57cc56f-b4s68\" (UID: \"f22fe25c-091e-4ed7-b58c-092e4c75df3e\") " pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.197882 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.199686 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.199816 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.206144 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.228133 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bvz9\" (UniqueName: \"kubernetes.io/projected/9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a-kube-api-access-5bvz9\") pod \"machine-config-server-428j9\" (UID: \"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a\") " pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.249298 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdc5x\" (UniqueName: \"kubernetes.io/projected/51f058fc-be40-4ed7-a028-eeced6eec26a-kube-api-access-fdc5x\") pod \"ingress-canary-trptt\" (UID: \"51f058fc-be40-4ed7-a028-eeced6eec26a\") " pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.265377 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.273377 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.281610 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.281710 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.286266 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.286771 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.786754136 +0000 UTC m=+147.035517655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.287814 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.291273 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.304747 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.304976 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.309874 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.315790 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.328914 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.337569 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.345255 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.345475 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-428j9" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.352032 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-trptt" Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.387367 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.387686 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.887670051 +0000 UTC m=+147.136433570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.449521 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.469101 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rpbfz"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.471724 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hcmrp"] Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.489137 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.489507 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:27.989484434 +0000 UTC m=+147.238247953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.505236 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6hpw7"]
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.556308 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hqx9m"]
Dec 09 12:07:27 crc kubenswrapper[4703]: W1209 12:07:27.585527 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049a6f1a_4b47_421d_bfd6_f1c89a5c0a80.slice/crio-86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7 WatchSource:0}: Error finding container 86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7: Status 404 returned error can't find the container with id 86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.592763 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.593206 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.093165242 +0000 UTC m=+147.341928761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.614791 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5"]
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.696650 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.697635 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.197618204 +0000 UTC m=+147.446381723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.800364 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.800430 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.300403795 +0000 UTC m=+147.549167314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.800626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.808156 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.308123336 +0000 UTC m=+147.556886875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.819065 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s"]
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.832041 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw"]
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.838983 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" event={"ID":"95fb5258-8faf-4e0a-ba69-319222cca40a","Type":"ContainerStarted","Data":"1474dbedbad30a504a63675f2b650563e1ee4f58cbeed97c5592d1dbad800a2f"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.849682 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" event={"ID":"fd6b0396-1528-45bf-a713-67e7e20b3e96","Type":"ContainerStarted","Data":"70ba4f08448312977b2247d7d8e61488dc2bc26d6ced25f82c5d09ef38c46e59"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.859699 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" event={"ID":"80ff55a4-f343-4795-9c6f-4ff56a52ea82","Type":"ContainerStarted","Data":"80c2947996da9cce5ac898ca01237430d4e279f49faefe096d844b28ef2be889"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.867047 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" event={"ID":"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80","Type":"ContainerStarted","Data":"86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.871127 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" event={"ID":"7a1193b2-a3d3-4480-8a79-a56b722e062e","Type":"ContainerStarted","Data":"e1b98a45735847aac29f0b0d2a5d9f33f84cc33a6dd4d5d76725e511ea3d30bb"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.877172 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" event={"ID":"f3c962d1-1735-4e31-9b41-246c2876d628","Type":"ContainerStarted","Data":"320ef7dcfb630098ae023d76fe71698f95903bb5479c31a519fb93eb5872b1f3"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.896015 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" event={"ID":"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa","Type":"ContainerStarted","Data":"cb57121c1bc8f0f23878272ceb9fa712423a773c86f8a1897f568424a2e69bb2"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.897508 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" event={"ID":"4491d82e-ecc9-4873-b1dd-412889079392","Type":"ContainerStarted","Data":"a5216bdda3e3b6057dcf3cf48e9f299ffd3f932e7c758c86770af7ae9dd715ad"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.900985 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" event={"ID":"ae21c58e-0b2b-450e-980d-c2d839fda11b","Type":"ContainerStarted","Data":"ed3081291888e22d2f6070c06cb86683921ae834055f5a2dc8bb849f1718ca41"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.901371 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:27 crc kubenswrapper[4703]: E1209 12:07:27.902509 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.402490076 +0000 UTC m=+147.651253605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.928773 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" event={"ID":"8f077d38-7967-4ef5-ba75-d833279fdd96","Type":"ContainerStarted","Data":"f8695d2817e95b5d61ce077a4293c8d33a60f16b539dd83d8c5b4eb29a660b05"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.962524 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cjdjc" event={"ID":"de18af96-27c1-4d28-acfe-e0317de38dba","Type":"ContainerStarted","Data":"4e79d78c33bd0fa376ef8be4ca4acdcc8541dded508d529aa2551df705f24269"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.963926 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" event={"ID":"558df921-8b36-45ba-8cc8-b25a1a2f6172","Type":"ContainerStarted","Data":"5b6fc6505e3e9d4c79b5a560ed8e294a9cd186381c591d1a3bd6e1c3b4b7c8a0"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.970752 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" event={"ID":"d353402b-aa82-4c6a-aec3-4253503dfe34","Type":"ContainerStarted","Data":"da4f46279a559b9679ea636baf65c08a73fef26690160e1c5187506c19bf5d44"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.971804 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" event={"ID":"5668acde-421e-4a2c-8172-0030b25db0f6","Type":"ContainerStarted","Data":"49c045d7eb6f6a7b5584a7c9fd58ca72ebc2e5f8a80d297c8ecd5502ab36cd1f"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.972696 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" event={"ID":"1fc394d2-c901-4c55-b5e4-c4e2f889cb82","Type":"ContainerStarted","Data":"29d869413d88a8eda1653b46cf50f0bc062851a9a9c76f2ed8c92151bc9c0ec1"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.974058 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" event={"ID":"296d291a-3dbf-46c2-a60c-8646965dcbdc","Type":"ContainerStarted","Data":"4c4b4407e821a709599a2a2c765dfbafbfa33a0a0cca416a6029bb2250d64a1b"}
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.974272 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.989689 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" event={"ID":"b5760fc2-3dd8-4966-a906-41bebd96de5d","Type":"ContainerStarted","Data":"f39a1c6dea5904996f518ca849685b2b03eb698f1d2cc03c8592f0d595ae9e54"}
Dec 09 12:07:27 crc kubenswrapper[4703]: W1209 12:07:27.995712 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36981648_d6b7_4c08_96ec_622d069c4c19.slice/crio-a5c92fa00b29d25bd363fd1e8010503a5443b52be92c187e7fd422e760f941f3 WatchSource:0}: Error finding container a5c92fa00b29d25bd363fd1e8010503a5443b52be92c187e7fd422e760f941f3: Status 404 returned error can't find the container with id a5c92fa00b29d25bd363fd1e8010503a5443b52be92c187e7fd422e760f941f3
Dec 09 12:07:27 crc kubenswrapper[4703]: I1209 12:07:27.999772 4703 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2zgjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.001986 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.003019 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.004611 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.504593817 +0000 UTC m=+147.753357326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.022436 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" event={"ID":"b7d07c12-8633-4029-89e3-9298dd140444","Type":"ContainerStarted","Data":"5fe3f5b55b0094ecd3d2d5a00128326cbb14640d6c5b3300aab32927c1bb489c"}
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.026777 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" event={"ID":"68775d5a-a682-4965-ac8a-74216e8471fb","Type":"ContainerStarted","Data":"73fa2e24e6e576494c2255fda8c6124ea53fe4335064450217b0dc7a2dc4fa46"}
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.030684 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" event={"ID":"c4a6920d-8199-478b-bc08-a96dbc58d236","Type":"ContainerStarted","Data":"c4ab8a2efb967065c1b063458d555f2e73763534a6acf3ec75c9bdc24544e6cb"}
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.049826 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" event={"ID":"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6","Type":"ContainerStarted","Data":"38b24cbdf8befc066b79433b5982290ce3783a012f12e829493960a303bd261c"}
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.049861 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" event={"ID":"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6","Type":"ContainerStarted","Data":"198c507c1a1eb6fe21b5d2d992c4b97cdbead4477be5423c3730a8b5e38c41d0"}
Dec 09 12:07:28 crc kubenswrapper[4703]: W1209 12:07:28.051768 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad543716_1bbb_4a16_9670_084660959961.slice/crio-075a1b8c42ab1a1b6044aac08c32e09772717e1fed8f3d8e691a7b50918ee6cd WatchSource:0}: Error finding container 075a1b8c42ab1a1b6044aac08c32e09772717e1fed8f3d8e691a7b50918ee6cd: Status 404 returned error can't find the container with id 075a1b8c42ab1a1b6044aac08c32e09772717e1fed8f3d8e691a7b50918ee6cd
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.104794 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.109351 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.609331377 +0000 UTC m=+147.858094896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.159931 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"]
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.164540 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj"]
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.190221 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd"]
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.195060 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm"]
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.205572 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z"]
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.206250 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.206569 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.706555512 +0000 UTC m=+147.955319031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: W1209 12:07:28.272091 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8699089_c9ff_4389_8a8e_72b5c976b5ae.slice/crio-94e01773e15f34b8c49f65171fae43bac88817c80fb70a0d8317cfdff376b1e7 WatchSource:0}: Error finding container 94e01773e15f34b8c49f65171fae43bac88817c80fb70a0d8317cfdff376b1e7: Status 404 returned error can't find the container with id 94e01773e15f34b8c49f65171fae43bac88817c80fb70a0d8317cfdff376b1e7
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.312985 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.313443 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.813424326 +0000 UTC m=+148.062187845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.417378 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.417934 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:28.917922898 +0000 UTC m=+148.166686407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.436475 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" podStartSLOduration=128.436455203 podStartE2EDuration="2m8.436455203s" podCreationTimestamp="2025-12-09 12:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:28.436089171 +0000 UTC m=+147.684852690" watchObservedRunningTime="2025-12-09 12:07:28.436455203 +0000 UTC m=+147.685218722"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.518491 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.518832 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.018812743 +0000 UTC m=+148.267576262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.623364 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.623734 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.123718758 +0000 UTC m=+148.372482267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.676568 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vqd7s" podStartSLOduration=129.676537096 podStartE2EDuration="2m9.676537096s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:28.652273072 +0000 UTC m=+147.901036591" watchObservedRunningTime="2025-12-09 12:07:28.676537096 +0000 UTC m=+147.925300615"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.721630 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" podStartSLOduration=129.721610233 podStartE2EDuration="2m9.721610233s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:28.712529782 +0000 UTC m=+147.961293311" watchObservedRunningTime="2025-12-09 12:07:28.721610233 +0000 UTC m=+147.970373752"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.724318 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.724698 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.224684355 +0000 UTC m=+148.473447874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.763754 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8cpwb" podStartSLOduration=129.763733022 podStartE2EDuration="2m9.763733022s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:28.761839055 +0000 UTC m=+148.010602574" watchObservedRunningTime="2025-12-09 12:07:28.763733022 +0000 UTC m=+148.012496541"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.765910 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" podStartSLOduration=129.765897437 podStartE2EDuration="2m9.765897437s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:28.730454288 +0000 UTC m=+147.979217807" watchObservedRunningTime="2025-12-09 12:07:28.765897437 +0000 UTC m=+148.014660956"
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.826072 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.826575 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.326560999 +0000 UTC m=+148.575324518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.928887 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.929032 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.42901057 +0000 UTC m=+148.677774089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:28 crc kubenswrapper[4703]: I1209 12:07:28.929366 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:28 crc kubenswrapper[4703]: E1209 12:07:28.929723 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.429711572 +0000 UTC m=+148.678475091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:28.998178 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.031119 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.031437 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.53141352 +0000 UTC m=+148.780177039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.066992 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" event={"ID":"bf95fba6-ebed-4f09-be10-b8d67bb51752","Type":"ContainerStarted","Data":"d88ef69d034298d067de504916d62485b43af80a465c933b857c7b8772c5e60d"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.067902 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerStarted","Data":"94e01773e15f34b8c49f65171fae43bac88817c80fb70a0d8317cfdff376b1e7"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.081746 4703 generic.go:334] "Generic (PLEG): container finished" podID="c4a6920d-8199-478b-bc08-a96dbc58d236" containerID="c57c56ba14bed638fa6671177df602d2bff8ae0f3e1f1b2643298e27dcc902ff" exitCode=0
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.141390 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.141733 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.641715917 +0000 UTC m=+148.890479446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: W1209 12:07:29.145514 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b1bfa15_e14a_489c_9eb9_f9774dfca6b4.slice/crio-b3981b7a7fa9b725841e1c0fa9babe9e4113747bc760605ee3bae9aee9b60bfb WatchSource:0}: Error finding container b3981b7a7fa9b725841e1c0fa9babe9e4113747bc760605ee3bae9aee9b60bfb: Status 404 returned error can't find the container with id b3981b7a7fa9b725841e1c0fa9babe9e4113747bc760605ee3bae9aee9b60bfb
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.145849 4703 generic.go:334] "Generic (PLEG): container finished" podID="fd6b0396-1528-45bf-a713-67e7e20b3e96" containerID="e0387a5dbb7f619eeb74aace7e7a43fc5d479e316e3cd41d8c7f6fad75753241" exitCode=0
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233008 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" event={"ID":"edf79140-82dd-4511-a921-d2a8de8635bf","Type":"ContainerStarted","Data":"f83d08ba597598338b77141499c3670b9e7f4bb4706152c487d63c80b079c5ef"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233048 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" event={"ID":"66e2c60b-b67f-4256-b15b-d987c05f3ea8","Type":"ContainerStarted","Data":"7e871adbfd78968ad4d63a41dc2735894d47061b60bdc5cb78d57a780aefbf45"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233071 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" event={"ID":"42f7caf7-af08-4406-b189-3e4ce5fa6819","Type":"ContainerStarted","Data":"3879617a45f5a0cb046faf5063d9d0b0dc11c582128e36b2b9c51b4edf61a511"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233083 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" event={"ID":"c4a6920d-8199-478b-bc08-a96dbc58d236","Type":"ContainerDied","Data":"c57c56ba14bed638fa6671177df602d2bff8ae0f3e1f1b2643298e27dcc902ff"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233097 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" event={"ID":"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa","Type":"ContainerStarted","Data":"2bcfefe7158f07497ce89fbc7cb8b6a9e5ff1311e2fdcb3a0bfccd46a72a812c"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233110 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-428j9" event={"ID":"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a","Type":"ContainerStarted","Data":"fc72921a5cb8784c2af5c286f56760ea5307c549e263d0dbb45e7c7e8f1644c4"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233122 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" event={"ID":"296d291a-3dbf-46c2-a60c-8646965dcbdc","Type":"ContainerStarted","Data":"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233136 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" event={"ID":"b002710e-eb24-40e3-ba04-74fd233f4def","Type":"ContainerStarted","Data":"2a4bea5328a2b1ef0d7ac0b7f57eeabc3dd089c4f340ee5bc93f841b2829084e"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.232992 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" podStartSLOduration=130.232971993 podStartE2EDuration="2m10.232971993s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:29.230932253 +0000 UTC m=+148.479695772" watchObservedRunningTime="2025-12-09 12:07:29.232971993 +0000 UTC m=+148.481735512"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233148 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w2z24" event={"ID":"b7d07c12-8633-4029-89e3-9298dd140444","Type":"ContainerStarted","Data":"048b1c5b1e69d50c0ed3699d8ee1d9a3586d4bf55ee172648da3c2cca2f82f0d"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233251 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" event={"ID":"ae21c58e-0b2b-450e-980d-c2d839fda11b","Type":"ContainerStarted","Data":"25ebbfa24f2ad5deb5f47337ade1beb6a55071a60604ede799ba25a3cf74eba6"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233267 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-92q4r" event={"ID":"b5760fc2-3dd8-4966-a906-41bebd96de5d","Type":"ContainerStarted","Data":"e2477ad4f9d122337a566024c4d8c6ca344ac82c2bc309bac178867ea3b96a45"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233279 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" event={"ID":"fd6b0396-1528-45bf-a713-67e7e20b3e96","Type":"ContainerDied","Data":"e0387a5dbb7f619eeb74aace7e7a43fc5d479e316e3cd41d8c7f6fad75753241"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233291 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" event={"ID":"ad543716-1bbb-4a16-9670-084660959961","Type":"ContainerStarted","Data":"075a1b8c42ab1a1b6044aac08c32e09772717e1fed8f3d8e691a7b50918ee6cd"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233302 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" event={"ID":"a58cf20a-3255-44f0-b38a-2017d8cce4e0","Type":"ContainerStarted","Data":"599e3dd6a608cddba737386a65c7f24b42b03cb97283aedcc205ab95445753bc"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233314 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-psc6r" event={"ID":"8a37abfc-17ea-4aac-bbb9-9650cb15a2f6","Type":"ContainerStarted","Data":"8993a7ef79491096212090d1f1d72d43b4fa63b7568cb15b2f55c5f9fd63f8ea"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.233325 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqx9m" event={"ID":"36981648-d6b7-4c08-96ec-622d069c4c19","Type":"ContainerStarted","Data":"a5c92fa00b29d25bd363fd1e8010503a5443b52be92c187e7fd422e760f941f3"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.239527 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" event={"ID":"558df921-8b36-45ba-8cc8-b25a1a2f6172","Type":"ContainerStarted","Data":"15c01b87de4ac88fd86d8776a4d609a24fb3af2548a6d1797ecbdaf20c8d8a0c"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.241717 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" event={"ID":"f0a85bf1-837f-4175-803a-c09fde56e5d9","Type":"ContainerStarted","Data":"8787091afc24fb697d325de7b10e0cf750229c060a92ff338732d791d6e5d890"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.242217 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.243577 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.74355545 +0000 UTC m=+148.992318989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.259634 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cjdjc" event={"ID":"de18af96-27c1-4d28-acfe-e0317de38dba","Type":"ContainerStarted","Data":"446101c781b3685f533b3bb04b40d846e78147f6538122c8e299d99999952f65"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.305910 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cjdjc" podStartSLOduration=130.305894122 podStartE2EDuration="2m10.305894122s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:29.304719968 +0000 UTC m=+148.553483487" watchObservedRunningTime="2025-12-09 12:07:29.305894122 +0000 UTC m=+148.554657641"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.310002 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" event={"ID":"d353402b-aa82-4c6a-aec3-4253503dfe34","Type":"ContainerStarted","Data":"9c68cb6f3847926d093f285a2e79e50bdccea9fbd384aa505343a4b45a929d83"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.344372 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" event={"ID":"4491d82e-ecc9-4873-b1dd-412889079392","Type":"ContainerStarted","Data":"9528fb8600d4cb2747d0925b4e157c766d3a71d269ab94a448815b5b9c5a39c9"}
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.345645 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.350466 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.850430153 +0000 UTC m=+149.099193672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.366226 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mktpn" podStartSLOduration=129.366168474 podStartE2EDuration="2m9.366168474s" podCreationTimestamp="2025-12-09 12:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:29.351266339 +0000 UTC m=+148.600029868" watchObservedRunningTime="2025-12-09 12:07:29.366168474 +0000 UTC m=+148.614932003"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.441756 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-w77wb"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.454230 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.460171 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:29.960144072 +0000 UTC m=+149.208907591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.508086 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-b4s68"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.551536 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.557177 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-trptt"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.557322 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.558311 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.558680 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.058663386 +0000 UTC m=+149.307426905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.659640 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.660251 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.16017117 +0000 UTC m=+149.408934689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.752275 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cjdjc"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.762939 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.763263 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.263242699 +0000 UTC m=+149.512006218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.774496 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:07:29 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld
Dec 09 12:07:29 crc kubenswrapper[4703]: [+]process-running ok
Dec 09 12:07:29 crc kubenswrapper[4703]: healthz check failed
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.774538 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.848937 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hdphr"]
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.866107 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.866533 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.366510725 +0000 UTC m=+149.615274244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.878397 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7phbd"]
Dec 09 12:07:29 crc kubenswrapper[4703]: W1209 12:07:29.917432 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2078d397_e8a5_4dbf_8573_360a9c373084.slice/crio-e11cbdcb9eaeb83f21e728183ff63cfb3227504496e5bb4a5a249049888f13a1 WatchSource:0}: Error finding container e11cbdcb9eaeb83f21e728183ff63cfb3227504496e5bb4a5a249049888f13a1: Status 404 returned error can't find the container with id e11cbdcb9eaeb83f21e728183ff63cfb3227504496e5bb4a5a249049888f13a1
Dec 09 12:07:29 crc kubenswrapper[4703]: I1209 12:07:29.967055 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:29 crc kubenswrapper[4703]: E1209 12:07:29.967385 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.467374109 +0000 UTC m=+149.716137628 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:30 crc kubenswrapper[4703]: W1209 12:07:30.034630 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ea2c181da996929ff4470c2d58b95ff56589da671ed881901d43638611950a7f WatchSource:0}: Error finding container ea2c181da996929ff4470c2d58b95ff56589da671ed881901d43638611950a7f: Status 404 returned error can't find the container with id ea2c181da996929ff4470c2d58b95ff56589da671ed881901d43638611950a7f
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.067897 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.068290 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.568266054 +0000 UTC m=+149.817029593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.083915 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.083999 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:07:30 crc kubenswrapper[4703]: W1209 12:07:30.104354 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-19fb2ce32a79a6c582bc4b8c80bd236fbbbf27290976a95b45037937a70723ac WatchSource:0}: Error finding container 19fb2ce32a79a6c582bc4b8c80bd236fbbbf27290976a95b45037937a70723ac: Status 404 returned error can't find the container with id 19fb2ce32a79a6c582bc4b8c80bd236fbbbf27290976a95b45037937a70723ac
Dec 09 12:07:30 crc kubenswrapper[4703]: W1209 12:07:30.162306 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e44c4b9_79cf_44c7_ad73_6aaeac691e86.slice/crio-b73c888b4166daddd8c5cf8dd382a52f9fba72fb6fa950f973ddc04e35ae5b71 WatchSource:0}: Error finding container b73c888b4166daddd8c5cf8dd382a52f9fba72fb6fa950f973ddc04e35ae5b71: Status 404 returned error can't find the container with id b73c888b4166daddd8c5cf8dd382a52f9fba72fb6fa950f973ddc04e35ae5b71
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.169733 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.170255 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.670238441 +0000 UTC m=+149.919001960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.270473 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.271421 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.771403454 +0000 UTC m=+150.020166973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.372821 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.373322 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.873306459 +0000 UTC m=+150.122069978 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.396427 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" event={"ID":"b002710e-eb24-40e3-ba04-74fd233f4def","Type":"ContainerStarted","Data":"27da40880ec38f28a16357fd3b8ab852ab03cc421650f2171b85c4bb6b415fe9"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.455909 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" event={"ID":"f22fe25c-091e-4ed7-b58c-092e4c75df3e","Type":"ContainerStarted","Data":"6453ae9ca803ba77eb0dde2f1c14b405db1fdd306576823c8540d75ba872ef2f"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.457468 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7phbd" event={"ID":"8e44c4b9-79cf-44c7-ad73-6aaeac691e86","Type":"ContainerStarted","Data":"b73c888b4166daddd8c5cf8dd382a52f9fba72fb6fa950f973ddc04e35ae5b71"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.460489 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-428j9" event={"ID":"9cbfe36d-d2b8-47cc-ad69-742d1cd0c77a","Type":"ContainerStarted","Data":"769b901440e7db9db09eb1893c51048a0527d18519f4e16a2ec11fb443759b10"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.463310 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"19fb2ce32a79a6c582bc4b8c80bd236fbbbf27290976a95b45037937a70723ac"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.464618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" event={"ID":"66e2c60b-b67f-4256-b15b-d987c05f3ea8","Type":"ContainerStarted","Data":"1e25fb0d48a8229bdb74b6de505684a98ac2090e16a404e39b36506aaf6800d4"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.465609 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" event={"ID":"5668acde-421e-4a2c-8172-0030b25db0f6","Type":"ContainerStarted","Data":"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.466541 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.474048 4703 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2xlcr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.474105 4703 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.474418 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.474588 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.974569955 +0000 UTC m=+150.223333484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.474614 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.474901 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:30.974894565 +0000 UTC m=+150.223658074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.475347 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" event={"ID":"c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa","Type":"ContainerStarted","Data":"8d1207bf9c3c7c3f1df35680ba368ecd6efa3e75b892940b04e23e88fe3b445b"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.476149 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.476960 4703 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tf7x5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.476989 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" podUID="c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.478412 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" event={"ID":"1fc394d2-c901-4c55-b5e4-c4e2f889cb82","Type":"ContainerStarted","Data":"76dd3bbfbea0b0fb102928ff0bfeba2b30a2447e277a58700ea430a58e2b181e"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.493972 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-428j9" podStartSLOduration=6.493934104 podStartE2EDuration="6.493934104s" podCreationTimestamp="2025-12-09 12:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.481986856 +0000 UTC m=+149.730750385" watchObservedRunningTime="2025-12-09 12:07:30.493934104 +0000 UTC m=+149.742697613" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.500760 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" podStartSLOduration=131.500736427 podStartE2EDuration="2m11.500736427s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.499651535 +0000 UTC m=+149.748415054" watchObservedRunningTime="2025-12-09 12:07:30.500736427 +0000 UTC m=+149.749499946" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.505099 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" 
event={"ID":"a58cf20a-3255-44f0-b38a-2017d8cce4e0","Type":"ContainerStarted","Data":"01dffdd7682be80408cc0490a162e1ced4e7748f5689454497506385ce858be3"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.507095 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" event={"ID":"42f7caf7-af08-4406-b189-3e4ce5fa6819","Type":"ContainerStarted","Data":"9250826ae135d49902674493ee0917bd5f3d3d8a264c0e1a626be71194aea996"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.508537 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" event={"ID":"3dfa5db2-7657-47b6-804c-878fd45694e5","Type":"ContainerStarted","Data":"4a3e419e860d8f2a4be892df65d7cf7fd698b7100681d593148021c559bdaa3c"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.520110 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" podStartSLOduration=131.520091696 podStartE2EDuration="2m11.520091696s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.517270002 +0000 UTC m=+149.766033521" watchObservedRunningTime="2025-12-09 12:07:30.520091696 +0000 UTC m=+149.768855215" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.527324 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerStarted","Data":"a017079e8d9babf21e6a2956dc97b734c4fd3227220478313cf0a7b1ee0ecc20"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.528279 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.529806 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" event={"ID":"2b57ba45-1dc5-4f41-93f1-94c5d07cebaa","Type":"ContainerStarted","Data":"6e050b292e2f1f5c333b115b19648877fa0251c2dd2980cef91b59760c5d3ae7"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.532294 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" event={"ID":"68775d5a-a682-4965-ac8a-74216e8471fb","Type":"ContainerStarted","Data":"7ead7e6efe53ef194bc223578b49ad106255800daecb0e4f50df137c12a6e1f8"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.532770 4703 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlkrh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.532803 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.538167 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6bf132282271105e459b707aea775998445c8482298abc2c9495cd368eba36cc"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.543279 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w77wb" event={"ID":"2078d397-e8a5-4dbf-8573-360a9c373084","Type":"ContainerStarted","Data":"e11cbdcb9eaeb83f21e728183ff63cfb3227504496e5bb4a5a249049888f13a1"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.544373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-trptt" event={"ID":"51f058fc-be40-4ed7-a028-eeced6eec26a","Type":"ContainerStarted","Data":"4df2f004d04e31d3a0f70a3d1bf64d68fae53306f86c97e7d2b52d3f44939bf5"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.547838 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" event={"ID":"ad543716-1bbb-4a16-9670-084660959961","Type":"ContainerStarted","Data":"e2dd2b9d7bbe90c749e128f3f48fa1de6cfe2f6d643d6100ae7ae6ccc650f7f0"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.548238 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.553176 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" event={"ID":"4491d82e-ecc9-4873-b1dd-412889079392","Type":"ContainerStarted","Data":"5c5c89ad46461a2f44beee92ba31b7b75a31c1706c3ba699710a5ea693ebf08b"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.554409 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8szns" podStartSLOduration=131.55439066 podStartE2EDuration="2m11.55439066s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.545890186 +0000 UTC m=+149.794653705" watchObservedRunningTime="2025-12-09 12:07:30.55439066 +0000 UTC m=+149.803154169" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.555946 4703 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ng8gw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:5443/healthz\": dial tcp 10.217.0.13:5443: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.556013 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" podUID="ad543716-1bbb-4a16-9670-084660959961" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.13:5443/healthz\": dial tcp 10.217.0.13:5443: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.559759 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" event={"ID":"e2de5e19-e8f3-4721-96c3-943bf64a7dab","Type":"ContainerStarted","Data":"cff22c2a3ae157b389cc7ec3393fcfad78e6878592ad7727f10ae8e48d9c4c57"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.560660 4703 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" event={"ID":"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4","Type":"ContainerStarted","Data":"b3981b7a7fa9b725841e1c0fa9babe9e4113747bc760605ee3bae9aee9b60bfb"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.566938 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" event={"ID":"ae21c58e-0b2b-450e-980d-c2d839fda11b","Type":"ContainerStarted","Data":"520772fe6aedc1cd62a92b7534a1f53f5ed924d5c8e5d6965f052ab6444965ff"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.570203 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" podStartSLOduration=131.570172412 podStartE2EDuration="2m11.570172412s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.569183283 +0000 UTC m=+149.817946802" watchObservedRunningTime="2025-12-09 12:07:30.570172412 +0000 UTC m=+149.818935931" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.575492 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" event={"ID":"f0a85bf1-837f-4175-803a-c09fde56e5d9","Type":"ContainerStarted","Data":"272b08d9d6af2938e43cbbee21bb99f47964dbad1bdb6a116be06b86e2cfd3a0"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.580100 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" event={"ID":"f3c962d1-1735-4e31-9b41-246c2876d628","Type":"ContainerStarted","Data":"36df0b726be79c5927aa01cc20c555ff1483b772f09479a7ec8997136f8d8d71"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.581641 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea2c181da996929ff4470c2d58b95ff56589da671ed881901d43638611950a7f"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.584252 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" event={"ID":"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80","Type":"ContainerStarted","Data":"a43783aa0bf25bc8bb1b96389392d0c658ad806ff8f5c10d27ea3c0be3cf38a7"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.584473 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.584674 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.084646955 +0000 UTC m=+150.333410474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.588240 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.593344 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.093329434 +0000 UTC m=+150.342093023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.598540 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dh6tp" podStartSLOduration=131.59852845 podStartE2EDuration="2m11.59852845s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.597730625 +0000 UTC m=+149.846494144" watchObservedRunningTime="2025-12-09 12:07:30.59852845 +0000 UTC m=+149.847291959" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.599555 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.599605 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.613387 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" event={"ID":"8f077d38-7967-4ef5-ba75-d833279fdd96","Type":"ContainerStarted","Data":"3c3f571c9d10e74107bb6d355908487d99cd1fec3d90923e78dcba3372fc0951"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.614277 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.616521 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" 
event={"ID":"80ff55a4-f343-4795-9c6f-4ff56a52ea82","Type":"ContainerStarted","Data":"8061164018ca07972ac34a5963cfeaaed0f9bbb7f9269e7272248998e4a62afb"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.619050 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hqx9m" event={"ID":"36981648-d6b7-4c08-96ec-622d069c4c19","Type":"ContainerStarted","Data":"ce6527d69f718f0e589069c927a6344d14223f84b6905f1520edd05974536749"} Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.619094 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.622643 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zg8pp" podStartSLOduration=132.62262784 podStartE2EDuration="2m12.62262784s" podCreationTimestamp="2025-12-09 12:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.620871707 +0000 UTC m=+149.869635226" watchObservedRunningTime="2025-12-09 12:07:30.62262784 +0000 UTC m=+149.871391359" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.635700 4703 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.635758 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.635828 4703 patch_prober.go:28] interesting pod/console-operator-58897d9998-hcmrp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.635944 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" podUID="8f077d38-7967-4ef5-ba75-d833279fdd96" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.655359 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" podStartSLOduration=131.655341647 podStartE2EDuration="2m11.655341647s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.646533494 +0000 UTC m=+149.895297023" watchObservedRunningTime="2025-12-09 12:07:30.655341647 +0000 UTC m=+149.904105166" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.668938 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6cpwj" podStartSLOduration=131.668921783 
podStartE2EDuration="2m11.668921783s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.667120059 +0000 UTC m=+149.915883598" watchObservedRunningTime="2025-12-09 12:07:30.668921783 +0000 UTC m=+149.917685302" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.686555 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" podStartSLOduration=131.68653916 podStartE2EDuration="2m11.68653916s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.686413665 +0000 UTC m=+149.935177194" watchObservedRunningTime="2025-12-09 12:07:30.68653916 +0000 UTC m=+149.935302679" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.692153 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.692860 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.192833337 +0000 UTC m=+150.441596856 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.695141 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.695785 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.195770545 +0000 UTC m=+150.444534104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.773283 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:07:30 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld Dec 09 12:07:30 crc kubenswrapper[4703]: [+]process-running ok Dec 09 12:07:30 crc kubenswrapper[4703]: healthz check failed Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.773354 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.787781 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" podStartSLOduration=131.787765264 podStartE2EDuration="2m11.787765264s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.783848207 +0000 UTC m=+150.032611726" watchObservedRunningTime="2025-12-09 12:07:30.787765264 +0000 UTC m=+150.036528783" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.790646 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" podStartSLOduration=131.79062953 podStartE2EDuration="2m11.79062953s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.745593394 +0000 UTC m=+149.994356933" watchObservedRunningTime="2025-12-09 12:07:30.79062953 +0000 UTC m=+150.039393049" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.797713 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.798060 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.298044541 +0000 UTC m=+150.546808070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.871955 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llbbl" podStartSLOduration=131.871902809 podStartE2EDuration="2m11.871902809s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.822513823 +0000 UTC m=+150.071277342" watchObservedRunningTime="2025-12-09 12:07:30.871902809 +0000 UTC m=+150.120666328" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.872412 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rsr8p" podStartSLOduration=131.872405963 podStartE2EDuration="2m11.872405963s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.865777476 +0000 UTC m=+150.114540995" watchObservedRunningTime="2025-12-09 12:07:30.872405963 +0000 UTC m=+150.121169482" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.898648 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:30 crc kubenswrapper[4703]: E1209 12:07:30.899010 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.398982787 +0000 UTC m=+150.647746296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.985715 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzgkb" podStartSLOduration=131.985700379 podStartE2EDuration="2m11.985700379s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.984943907 +0000 UTC m=+150.233707426" watchObservedRunningTime="2025-12-09 12:07:30.985700379 +0000 UTC m=+150.234463898" Dec 09 12:07:30 crc kubenswrapper[4703]: I1209 12:07:30.986619 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hqx9m" podStartSLOduration=131.986612427 podStartE2EDuration="2m11.986612427s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:30.944407725 +0000 UTC m=+150.193171244" watchObservedRunningTime="2025-12-09 12:07:30.986612427 +0000 UTC m=+150.235375946" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.002834 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.003105 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.503088589 +0000 UTC m=+150.751852108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.063320 4703 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mdtkr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]log ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]etcd ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/max-in-flight-filter ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 09 12:07:31 crc kubenswrapper[4703]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 09 12:07:31 crc kubenswrapper[4703]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/project.openshift.io-projectcache ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/openshift.io-startinformers ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 09 12:07:31 crc kubenswrapper[4703]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 09 12:07:31 crc kubenswrapper[4703]: livez check failed Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.063410 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr" podUID="a58cf20a-3255-44f0-b38a-2017d8cce4e0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.105100 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.105444 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.605428147 +0000 UTC m=+150.854191666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.206421 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.206728 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.706713343 +0000 UTC m=+150.955476862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.307366 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.307806 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.807789324 +0000 UTC m=+151.056552843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.408985 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.409455 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:31.909434541 +0000 UTC m=+151.158198060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.511805 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.512143 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.01212751 +0000 UTC m=+151.260891029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.612962 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.613396 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.113329274 +0000 UTC m=+151.362092793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.680288 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" event={"ID":"bf95fba6-ebed-4f09-be10-b8d67bb51752","Type":"ContainerStarted","Data":"40f3dc2de4cab441a9ef7dd8eabf1e479e83dee6d6b1f4d73c3f6dcd732f6d8d"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.716341 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.716645 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.216634211 +0000 UTC m=+151.465397730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.733704 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" event={"ID":"fd6b0396-1528-45bf-a713-67e7e20b3e96","Type":"ContainerStarted","Data":"c33570c2785b911a6ade25a757731b855ed11ba526c81faf864d8e8ae3717383"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.761792 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:07:31 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld Dec 09 12:07:31 crc kubenswrapper[4703]: [+]process-running ok Dec 09 12:07:31 crc kubenswrapper[4703]: healthz check failed Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.761832 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.793463 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" event={"ID":"558df921-8b36-45ba-8cc8-b25a1a2f6172","Type":"ContainerStarted","Data":"6e04992d230096732c771d593a3aebbd03bd769bf317682fd9798ff37681119f"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.796588 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rsr7z" podStartSLOduration=132.796569429 podStartE2EDuration="2m12.796569429s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:31.719247309 +0000 UTC m=+150.968010828" watchObservedRunningTime="2025-12-09 12:07:31.796569429 +0000 UTC m=+151.045332948" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.798105 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878" podStartSLOduration=131.798096765 podStartE2EDuration="2m11.798096765s" podCreationTimestamp="2025-12-09 12:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:31.793175729 +0000 UTC m=+151.041939248" watchObservedRunningTime="2025-12-09 12:07:31.798096765 +0000 UTC m=+151.046860284" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.817399 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
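
The entries above repeat one root cause: every MountVolume.MountDevice and UnmountVolume.TearDown attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". The driver's node plugin (csi-hostpathplugin-hdphr, whose ContainerStarted event appears at 12:07:30.559759) has not yet registered with the kubelet, so both the pod releasing the volume (UID 8f668bae-612b-4b75-9490-919e737c6a3b) and the pod claiming it (image-registry-697d97f7c8-q57qh) stay stuck until registration completes. The kubelet discovers CSI drivers through registration sockets that each node plugin drops under its plugin registry directory; a minimal Go sketch of inspecting that directory, assuming the default kubelet root dir (illustrative code, not the kubelet's implementation):

package main

import (
	"fmt"
	"os"
)

// Default registration-socket directory under the kubelet root dir;
// adjust if the kubelet runs with a non-default --root-dir.
const pluginRegistry = "/var/lib/kubelet/plugins_registry"

func main() {
	entries, err := os.ReadDir(pluginRegistry)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read plugin registry:", err)
		return
	}
	// While no kubevirt.io.hostpath-provisioner socket is listed here, the
	// "not found in the list of registered CSI drivers" errors above persist.
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}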
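Each failure is followed by "No retries permitted until ... (durationBeforeRetry 500ms)" from nestedpendingoperations.go:348: the kubelet refuses to re-run a failed volume operation until a per-operation deadline passes, and the wait is meant to grow for repeated failures of the same operation (in this log every gate still shows the initial 500ms). A rough sketch of that kind of retry gate, with illustrative names rather than the kubelet's actual types:

package main

import (
	"fmt"
	"time"
)

// backoff gates retries of one operation: refuse until a deadline,
// then lengthen the wait for the next failure, up to a cap.
type backoff struct {
	delay    time.Duration
	max      time.Duration
	notUntil time.Time
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // the durationBeforeRetry seen above
	}
	b.notUntil = now.Add(b.delay)
	if next := 2 * b.delay; next <= b.max {
		b.delay = next // assumed doubling, capped at max
	}
}

func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notUntil) }

func main() {
	b := backoff{max: 2 * time.Minute}
	now := time.Now()
	b.fail(now)
	fmt.Printf("No retries permitted until %s\n", b.notUntil.Format(time.RFC3339Nano))
	fmt.Println("retry allowed immediately?", b.allowed(now)) // false: inside the gate
}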
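The bursts of "Probe failed ... connect: connection refused" (machine-config-daemon, controller-manager, catalog-operator, marketplace-operator, packageserver, downloads, console-operator, oauth-openshift) are ordinary during startup: a readiness or liveness probe is an HTTP GET against the container's endpoint, and it fails with a dial error until the process listens on its port. A sketch of such a probe, using a hypothetical local URL in place of the pod IPs above:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one kubelet-style HTTP probe: a dial error fails it,
// and only a 2xx/3xx status counts as success.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 10.217.0.8:8443: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical endpoint standing in for e.g. https://10.217.0.8:8443/healthz.
	if err := probe("http://127.0.0.1:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}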
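The router-default and apiserver-76f77b778f-mdtkr startup probes fail differently: those processes are up, but /healthz returns 500, and the probe output uses the standard Kubernetes healthz format, one line per named check, failing checks reported as "[-]name failed: reason withheld", and a trailing "healthz check failed". A compact sketch of that aggregation pattern (not the actual k8s.io healthz package):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// check is one named health check, mirroring the [+]/[-] lines above.
type check struct {
	name string
	fn   func() error
}

// healthz runs every check and returns 500 if any of them fails,
// which is what the startup probes above observe.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		} else {
			body += "healthz check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	h := healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	})
	rec := httptest.NewRecorder()
	h(rec, httptest.NewRequest("GET", "/healthz", nil))
	fmt.Print(rec.Code, "\n", rec.Body.String()) // 500 plus the [+]/[-] listing
}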
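Finally, the pod_startup_latency_tracker entries are plain time arithmetic: podStartSLOduration (and podStartE2EDuration) is the observed running timestamp minus podCreationTimestamp. Reproducing the numbers from the catalog-operator entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Timestamps copied from the catalog-operator tracker entry above.
	created, err := time.Parse(layout, "2025-12-09 12:05:19 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-09 12:07:30.500736427 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created))           // 2m11.500736427s, the podStartE2EDuration
	fmt.Println(observed.Sub(created).Seconds()) // 131.500736427, the podStartSLOduration
}
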
Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.819784 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.319762303 +0000 UTC m=+151.568525822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.874224 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" event={"ID":"95fb5258-8faf-4e0a-ba69-319222cca40a","Type":"ContainerStarted","Data":"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.881430 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.889635 4703 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6hpw7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.889676 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.930781 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:31 crc kubenswrapper[4703]: E1209 12:07:31.931518 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.431501402 +0000 UTC m=+151.680264921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.945093 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" event={"ID":"c4a6920d-8199-478b-bc08-a96dbc58d236","Type":"ContainerStarted","Data":"84a5c192833857c5ad6fd1f7cf1686eda3bfd06ec5c3f14b13c97300f12083ac"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.951979 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.959278 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" event={"ID":"f22fe25c-091e-4ed7-b58c-092e4c75df3e","Type":"ContainerStarted","Data":"b8642a21a4b59324bca113beedb774bdb41e12a99acb31878afba0399359ea14"} Dec 09 12:07:31 crc kubenswrapper[4703]: I1209 12:07:31.970938 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" event={"ID":"edf79140-82dd-4511-a921-d2a8de8635bf","Type":"ContainerStarted","Data":"5b3d47babbb15e25ba57c855206751a28db1da3eabaee1f56b8993a6560a2738"} Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.020436 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" podStartSLOduration=133.020414589 podStartE2EDuration="2m13.020414589s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:31.971065294 +0000 UTC m=+151.219828813" watchObservedRunningTime="2025-12-09 12:07:32.020414589 +0000 UTC m=+151.269178108" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.027838 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-czvkf" podStartSLOduration=133.02781453 podStartE2EDuration="2m13.02781453s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:31.858831021 +0000 UTC m=+151.107594540" watchObservedRunningTime="2025-12-09 12:07:32.02781453 +0000 UTC m=+151.276578049" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.033422 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.037477 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-09 12:07:32.537453528 +0000 UTC m=+151.786217047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.054737 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" podStartSLOduration=133.054715544 podStartE2EDuration="2m13.054715544s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.044655584 +0000 UTC m=+151.293419113" watchObservedRunningTime="2025-12-09 12:07:32.054715544 +0000 UTC m=+151.303479063" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.064728 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" event={"ID":"8b1bfa15-e14a-489c-9eb9-f9774dfca6b4","Type":"ContainerStarted","Data":"f503506aa9b2de7bf8eade17d6cf14843351b52f2fd3c3cc9230f15ac7d7307f"} Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.065451 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.065526 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.066330 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.068464 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.068563 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.079069 4703 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tf7x5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.079121 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" podUID="c7a93dcd-9a01-4ae7-bf05-d9e13e83d6aa" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.087909 4703 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlkrh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.087954 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.089772 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.093503 4703 patch_prober.go:28] interesting pod/console-operator-58897d9998-hcmrp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.093547 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hcmrp" podUID="8f077d38-7967-4ef5-ba75-d833279fdd96" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.094062 4703 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.094097 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.116755 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.128415 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" podStartSLOduration=133.128393265 podStartE2EDuration="2m13.128393265s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.124337124 +0000 UTC m=+151.373100653" watchObservedRunningTime="2025-12-09 12:07:32.128393265 +0000 UTC m=+151.377156784" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.136869 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.137045 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.137576 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.145518 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.645500157 +0000 UTC m=+151.894263666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.152563 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.248777 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.248943 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.249010 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.249091 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.249156 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.749141474 +0000 UTC m=+151.997904993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.283294 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-b4s68" podStartSLOduration=132.283279254 podStartE2EDuration="2m12.283279254s" podCreationTimestamp="2025-12-09 12:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.280334656 +0000 UTC m=+151.529098175" watchObservedRunningTime="2025-12-09 12:07:32.283279254 +0000 UTC m=+151.532042773" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.284086 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9d4qd" podStartSLOduration=133.284080198 podStartE2EDuration="2m13.284080198s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.207770397 +0000 UTC m=+151.456533916" watchObservedRunningTime="2025-12-09 12:07:32.284080198 +0000 UTC m=+151.532843707" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.314316 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.348133 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-99nm5" podStartSLOduration=133.348111651 podStartE2EDuration="2m13.348111651s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.347504603 +0000 UTC m=+151.596268132" watchObservedRunningTime="2025-12-09 12:07:32.348111651 +0000 UTC m=+151.596875170" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.355066 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.355391 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.855380558 +0000 UTC m=+152.104144077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.405331 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.456599 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.456987 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:32.956972124 +0000 UTC m=+152.205735643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.565046 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.565470 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.065457237 +0000 UTC m=+152.314220766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.641570 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-w77wb" podStartSLOduration=133.6415505 podStartE2EDuration="2m13.6415505s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:32.624590213 +0000 UTC m=+151.873353742" watchObservedRunningTime="2025-12-09 12:07:32.6415505 +0000 UTC m=+151.890314019" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.669580 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.670312 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.170294819 +0000 UTC m=+152.419058338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.757778 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:07:32 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld Dec 09 12:07:32 crc kubenswrapper[4703]: [+]process-running ok Dec 09 12:07:32 crc kubenswrapper[4703]: healthz check failed Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.757838 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.771715 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.772029 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.272011308 +0000 UTC m=+152.520774847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.874969 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.875367 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.375345236 +0000 UTC m=+152.624108745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.974583 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ng8gw" Dec 09 12:07:32 crc kubenswrapper[4703]: I1209 12:07:32.979959 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:32 crc kubenswrapper[4703]: E1209 12:07:32.980325 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.480307823 +0000 UTC m=+152.729071342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.081657 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.082033 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.582017082 +0000 UTC m=+152.830780601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.117991 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w77wb" event={"ID":"2078d397-e8a5-4dbf-8573-360a9c373084","Type":"ContainerStarted","Data":"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.147024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7phbd" event={"ID":"8e44c4b9-79cf-44c7-ad73-6aaeac691e86","Type":"ContainerStarted","Data":"2adb9066dd4092321a93574d9881a1e9fd2cf2aaa9bce057a0133c8bc36e73ab"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.147080 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7phbd" event={"ID":"8e44c4b9-79cf-44c7-ad73-6aaeac691e86","Type":"ContainerStarted","Data":"85b757a9dd3a2e0f4c6b443abb7d127d9ca4fbcde37d152b5470f07dce60ea62"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.147568 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.158448 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" event={"ID":"f3c962d1-1735-4e31-9b41-246c2876d628","Type":"ContainerStarted","Data":"7f0e01adaea866eb20933cc8ee41813a784e2263ace30087540ec230db8663db"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.162066 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" event={"ID":"b002710e-eb24-40e3-ba04-74fd233f4def","Type":"ContainerStarted","Data":"c680bd5e23424101ba0c365998e0b54d00b2ea957f03991252dfe06be2231566"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.187638 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.189223 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.689208995 +0000 UTC m=+152.937972514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.191154 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-trptt" event={"ID":"51f058fc-be40-4ed7-a028-eeced6eec26a","Type":"ContainerStarted","Data":"f081b26922b3582ed70fe7d0efc7149c8ef07243b26377dffb1eda2076b5ca59"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.215295 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7phbd" podStartSLOduration=9.215276895 podStartE2EDuration="9.215276895s" podCreationTimestamp="2025-12-09 12:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.214934604 +0000 UTC m=+152.463698123" watchObservedRunningTime="2025-12-09 12:07:33.215276895 +0000 UTC m=+152.464040414" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.218549 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fed492db2a3652374280c068f5ed554d0811495d018abf90d3f573722e7d6a24"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.250498 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-82x8s" event={"ID":"66e2c60b-b67f-4256-b15b-d987c05f3ea8","Type":"ContainerStarted","Data":"0e95f62677f05459f29b3bf60aaed27c6bcf70dccfb225f056b158d515d3ad5d"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.288934 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.290078 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.790052589 +0000 UTC m=+153.038816108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.291419 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"eaee0d731aa05dcdc2854d006cdd593bb3620712953128d8acf41287b1cf4c57"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.351321 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rpbfz" podStartSLOduration=134.351306939 podStartE2EDuration="2m14.351306939s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.283740081 +0000 UTC m=+152.532503600" watchObservedRunningTime="2025-12-09 12:07:33.351306939 +0000 UTC m=+152.600070458" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.353101 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" event={"ID":"3dfa5db2-7657-47b6-804c-878fd45694e5","Type":"ContainerStarted","Data":"297b2388f515293cc39032def91f8d6987d7d206e008964659675bb339be1523"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.353151 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" event={"ID":"3dfa5db2-7657-47b6-804c-878fd45694e5","Type":"ContainerStarted","Data":"fce0784ab92403a915e0068540810ae9df52a773570e7adf889fbd73e81fb245"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.353651 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.367894 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vv85r" podStartSLOduration=134.367872245 podStartE2EDuration="2m14.367872245s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.348631789 +0000 UTC m=+152.597395308" watchObservedRunningTime="2025-12-09 12:07:33.367872245 +0000 UTC m=+152.616635764" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.381119 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4804f8bbed2272b7491d992d742393b60fcc9066261865e9380917e3b5a061c8"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.382165 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.393505 4703 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" event={"ID":"f0a85bf1-837f-4175-803a-c09fde56e5d9","Type":"ContainerStarted","Data":"664bdd1575bdbfe585e146e84b8a53b4393c0366c3a21ee9045161d1b38ff910"} Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.394601 4703 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vlkrh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.394632 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.395538 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.395799 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.895788399 +0000 UTC m=+153.144551918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.469541 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-trptt" podStartSLOduration=9.469524662 podStartE2EDuration="9.469524662s" podCreationTimestamp="2025-12-09 12:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.393981284 +0000 UTC m=+152.642744803" watchObservedRunningTime="2025-12-09 12:07:33.469524662 +0000 UTC m=+152.718288181" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.497018 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.498677 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:07:33.998643002 +0000 UTC m=+153.247406521 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.516862 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dsplm" podStartSLOduration=134.516845876 podStartE2EDuration="2m14.516845876s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.516245088 +0000 UTC m=+152.765008607" watchObservedRunningTime="2025-12-09 12:07:33.516845876 +0000 UTC m=+152.765609395" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.604819 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.605126 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.105114894 +0000 UTC m=+153.353878413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.708436 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tf7x5" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.708859 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.709283 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.209250215 +0000 UTC m=+153.458013734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.719920 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" podStartSLOduration=134.719903944 podStartE2EDuration="2m14.719903944s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:33.645551032 +0000 UTC m=+152.894314551" watchObservedRunningTime="2025-12-09 12:07:33.719903944 +0000 UTC m=+152.968667463" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.720571 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:07:33 crc kubenswrapper[4703]: W1209 12:07:33.745550 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod95adac1b_58b8_4b95_bd5a_e608dd6d3838.slice/crio-4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42 WatchSource:0}: Error finding container 4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42: Status 404 returned error can't find the container with id 4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42 Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.752676 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:07:33 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld Dec 09 12:07:33 crc kubenswrapper[4703]: [+]process-running ok Dec 09 12:07:33 crc kubenswrapper[4703]: healthz check failed Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.752767 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.810337 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.810741 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.310727908 +0000 UTC m=+153.559491427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.911879 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.912242 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.41220953 +0000 UTC m=+153.660973049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.912558 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:33 crc kubenswrapper[4703]: E1209 12:07:33.913060 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.413048305 +0000 UTC m=+153.661811824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:33 crc kubenswrapper[4703]: I1209 12:07:33.949827 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.013723 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.014318 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.514303721 +0000 UTC m=+153.763067240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.115993 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.116390 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.616373811 +0000 UTC m=+153.865137330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.217550 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.217740 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.71771448 +0000 UTC m=+153.966477999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.217944 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.218266 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.718258436 +0000 UTC m=+153.967021955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.318989 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.319327 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.819311245 +0000 UTC m=+154.068074754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.319651 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hcmrp"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.404094 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" event={"ID":"e2de5e19-e8f3-4721-96c3-943bf64a7dab","Type":"ContainerStarted","Data":"e2220296035d1a894a9b987abab9c5339de8bc8163b8546a6fd22ded2d970d54"}
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.406262 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"95adac1b-58b8-4b95-bd5a-e608dd6d3838","Type":"ContainerStarted","Data":"4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42"}
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.413479 4703 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5tsxq container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.413522 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq" podUID="c4a6920d-8199-478b-bc08-a96dbc58d236" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.420576 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.420859 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:34.92084635 +0000 UTC m=+154.169609869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.521843 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.523047 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.023016853 +0000 UTC m=+154.271780382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.623753 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.624561 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.124548437 +0000 UTC m=+154.373311946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.724998 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.725349 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.225323278 +0000 UTC m=+154.474086797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.725703 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.726103 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.226090881 +0000 UTC m=+154.474854390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.753042 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:07:34 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld
Dec 09 12:07:34 crc kubenswrapper[4703]: [+]process-running ok
Dec 09 12:07:34 crc kubenswrapper[4703]: healthz check failed
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.753402 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.826933 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.827171 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.327145661 +0000 UTC m=+154.575909180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.827500 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.827829 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.327817681 +0000 UTC m=+154.576581190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.848050 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.849071 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.857035 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.875309 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.930719 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.930992 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.931048 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:34 crc kubenswrapper[4703]: I1209 12:07:34.931065 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4cq\" (UniqueName: \"kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:34 crc kubenswrapper[4703]: E1209 12:07:34.931237 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.431220191 +0000 UTC m=+154.679983710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032278 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032324 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032361 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032379 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4cq\" (UniqueName: \"kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.032676 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.532656392 +0000 UTC m=+154.781419911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032858 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.032980 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.072813 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4cq\" (UniqueName: \"kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq\") pod \"community-operators-8xkww\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.133654 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.133884 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.633846446 +0000 UTC m=+154.882609975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.134081 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.134510 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.634502855 +0000 UTC m=+154.883266374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.165705 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xkww"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.191447 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jqdn"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.192710 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.212182 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jqdn"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.235682 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.235844 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.735815403 +0000 UTC m=+154.984578922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.235969 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.236295 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.736283547 +0000 UTC m=+154.985047066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.335678 4703 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.336865 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.337199 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.337283 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.337321 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbc6\" (UniqueName: \"kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.337436 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.837418619 +0000 UTC m=+155.086182138 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.387100 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.388383 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.401876 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.405310 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.439173 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.439249 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnbc6\" (UniqueName: \"kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.439299 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.439331 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.439581 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:35.939569951 +0000 UTC m=+155.188333470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.440032 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.440478 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.488556 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnbc6\" (UniqueName: \"kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6\") pod \"community-operators-6jqdn\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") " pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.515437 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" event={"ID":"e2de5e19-e8f3-4721-96c3-943bf64a7dab","Type":"ContainerStarted","Data":"2248855447c69087737b1ed61c57d42d19c35738d9a2447a005baeba86b5e0c0"}
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.515483 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" event={"ID":"e2de5e19-e8f3-4721-96c3-943bf64a7dab","Type":"ContainerStarted","Data":"30b6d754c2f1f4ffdde1bf5694f852c4b0e17dd5c94ad6b2cd2bea5411531595"}
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.530470 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.534822 4703 generic.go:334] "Generic (PLEG): container finished" podID="95adac1b-58b8-4b95-bd5a-e608dd6d3838" containerID="c821dc583592fb250b5be7d3efd4d6dd57e0c55457c97fd509198728d100e11b" exitCode=0
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.535215 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"95adac1b-58b8-4b95-bd5a-e608dd6d3838","Type":"ContainerDied","Data":"c821dc583592fb250b5be7d3efd4d6dd57e0c55457c97fd509198728d100e11b"}
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.543650 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.543850 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.543885 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-catalog-content\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.543971 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r94k\" (UniqueName: \"kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.544071 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.044056734 +0000 UTC m=+155.292820243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.563730 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5tsxq"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.600982 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.602244 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.606049 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"]
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.616002 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.637050 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mdtkr"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.644815 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r94k\" (UniqueName: \"kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.644918 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.644948 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-catalog-content\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.645062 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.648963 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.649501 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-catalog-content\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.652227 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.152213736 +0000 UTC m=+155.400977255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.681040 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r94k\" (UniqueName: \"kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k\") pod \"certified-operators-p2dxk\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.713050 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.745942 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.747683 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.247642557 +0000 UTC m=+155.496406076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.748112 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxgf\" (UniqueName: \"kubernetes.io/projected/6ef66484-04d6-4994-be99-16da9c301ea2-kube-api-access-2qxgf\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.748282 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.748322 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.748404 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.749424 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.249412741 +0000 UTC m=+155.498176260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.767876 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:07:35 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld
Dec 09 12:07:35 crc kubenswrapper[4703]: [+]process-running ok
Dec 09 12:07:35 crc kubenswrapper[4703]: healthz check failed
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.767984 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.787937 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:07:35 crc kubenswrapper[4703]: W1209 12:07:35.827920 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod621283ab_7eb7_4952_9059_c3c4209bca7b.slice/crio-76c26b9c6992ccadd1f1481349a151213db3c8473caf4210bba3c1032cf3e551 WatchSource:0}: Error finding container 76c26b9c6992ccadd1f1481349a151213db3c8473caf4210bba3c1032cf3e551: Status 404 returned error can't find the container with id 76c26b9c6992ccadd1f1481349a151213db3c8473caf4210bba3c1032cf3e551
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.851037 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.851331 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.851461 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxgf\" (UniqueName: \"kubernetes.io/projected/6ef66484-04d6-4994-be99-16da9c301ea2-kube-api-access-2qxgf\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.851544 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.852160 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.852316 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.352295105 +0000 UTC m=+155.601058624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.852678 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.884126 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxgf\" (UniqueName: \"kubernetes.io/projected/6ef66484-04d6-4994-be99-16da9c301ea2-kube-api-access-2qxgf\") pod \"certified-operators-f8tbx\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") " pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.959096 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:35 crc kubenswrapper[4703]: E1209 12:07:35.959480 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.459465078 +0000 UTC m=+155.708228597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:35 crc kubenswrapper[4703]: I1209 12:07:35.969078 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.060536 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:36 crc kubenswrapper[4703]: E1209 12:07:36.061171 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.561152566 +0000 UTC m=+155.809916085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.162568 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:36 crc kubenswrapper[4703]: E1209 12:07:36.162939 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.662924487 +0000 UTC m=+155.911688006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q57qh" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.258603 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.258647 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.264967 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:36 crc kubenswrapper[4703]: E1209 12:07:36.265339 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:36.765319077 +0000 UTC m=+156.014082596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.291427 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.331731 4703 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T12:07:35.335705158Z","Handler":null,"Name":""}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.375319 4703 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.375360 4703 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.375906 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.379071 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jqdn"]
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.411393 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.411649 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.493779 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q57qh\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.547532 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.568241 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" event={"ID":"e2de5e19-e8f3-4721-96c3-943bf64a7dab","Type":"ContainerStarted","Data":"269972d2687d8f47f665cda94dfd3464b6ec4cdbbe0379736c84e636b259eed9"}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.577888 4703 generic.go:334] "Generic (PLEG): container finished" podID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerID="8b3b7159d30f5014913848d5ca60f2c27acd05e0825d83215a43926cc1833fd3" exitCode=0
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.577951 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerDied","Data":"8b3b7159d30f5014913848d5ca60f2c27acd05e0825d83215a43926cc1833fd3"}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.577980 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerStarted","Data":"76c26b9c6992ccadd1f1481349a151213db3c8473caf4210bba3c1032cf3e551"}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.580551 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.580920 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.584407 4703 generic.go:334] "Generic (PLEG): container finished" podID="049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" containerID="a43783aa0bf25bc8bb1b96389392d0c658ad806ff8f5c10d27ea3c0be3cf38a7" exitCode=0
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.584481 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" event={"ID":"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80","Type":"ContainerDied","Data":"a43783aa0bf25bc8bb1b96389392d0c658ad806ff8f5c10d27ea3c0be3cf38a7"}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.593516 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hdphr" podStartSLOduration=12.593497803 podStartE2EDuration="12.593497803s" podCreationTimestamp="2025-12-09 12:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:36.589814804 +0000 UTC m=+155.838578323" watchObservedRunningTime="2025-12-09 12:07:36.593497803 +0000 UTC m=+155.842261322"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.598611 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.611794 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.618283 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerStarted","Data":"50959f8aa94dcb5d84ddaab4e8727a8e67756d433215860ed8be939dfc82609e"}
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.625498 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fh878"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.683395 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"]
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.751964 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cjdjc"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759434 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:07:36 crc kubenswrapper[4703]: [-]has-synced failed: reason withheld
Dec 09 12:07:36 crc kubenswrapper[4703]: [+]process-running ok
Dec 09 12:07:36 crc kubenswrapper[4703]: healthz check failed
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759775 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759881 4703 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759921 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759969 4703 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:36 crc kubenswrapper[4703]: I1209 12:07:36.759983 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.105071 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.160721 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.185087 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"] Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.193281 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"] Dec 09 12:07:37 crc kubenswrapper[4703]: E1209 12:07:37.193765 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95adac1b-58b8-4b95-bd5a-e608dd6d3838" containerName="pruner" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.193788 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="95adac1b-58b8-4b95-bd5a-e608dd6d3838" containerName="pruner" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.194391 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="95adac1b-58b8-4b95-bd5a-e608dd6d3838" containerName="pruner" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.196402 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.200158 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.219340 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"] Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.275660 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.275696 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.283536 4703 patch_prober.go:28] interesting pod/console-f9d7485db-w77wb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.283696 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-w77wb" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.289597 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305381 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir\") pod \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305527 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access\") pod \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\" (UID: \"95adac1b-58b8-4b95-bd5a-e608dd6d3838\") " Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305516 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "95adac1b-58b8-4b95-bd5a-e608dd6d3838" (UID: "95adac1b-58b8-4b95-bd5a-e608dd6d3838"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305855 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305915 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305947 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klflg\" (UniqueName: \"kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.305993 4703 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.314415 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "95adac1b-58b8-4b95-bd5a-e608dd6d3838" (UID: "95adac1b-58b8-4b95-bd5a-e608dd6d3838"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.407282 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.407382 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.407410 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klflg\" (UniqueName: \"kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.407567 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95adac1b-58b8-4b95-bd5a-e608dd6d3838-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.408704 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.409325 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.434921 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klflg\" (UniqueName: \"kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg\") pod \"redhat-marketplace-bkmjk\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.537944 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.575178 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"] Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.576304 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.585207 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"] Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.674039 4703 generic.go:334] "Generic (PLEG): container finished" podID="6ef66484-04d6-4994-be99-16da9c301ea2" containerID="0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef" exitCode=0 Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.674112 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerDied","Data":"0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.674145 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerStarted","Data":"034f96304e8984e4d763e19cfd5bfae285be9670bee6f5b6ac7d92c328f01712"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.687299 4703 generic.go:334] "Generic (PLEG): container finished" podID="be89872a-5325-41c1-85b8-6c9880d40f68" containerID="02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679" exitCode=0 Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.687371 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerDied","Data":"02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.690571 4703 generic.go:334] "Generic (PLEG): container finished" podID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerID="650f55920ad556417d2ae71fcae01687332d9d5271cc9c84da9eebe55d781c47" exitCode=0 Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.690619 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerDied","Data":"650f55920ad556417d2ae71fcae01687332d9d5271cc9c84da9eebe55d781c47"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.690640 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerStarted","Data":"ef3b0ccc9a4e865c5a147c61fa65fe22c3e0430da19335e7f753e35382f5a5c9"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.693359 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"95adac1b-58b8-4b95-bd5a-e608dd6d3838","Type":"ContainerDied","Data":"4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.693392 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa42da3c50f3c9b69b01f337aa848c38d54513460de357137605d6ad6585f42" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.693469 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.710592 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.710656 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfst\" (UniqueName: \"kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.710691 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.717800 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" event={"ID":"352f59bb-69ee-46d0-862c-d839ac334b35","Type":"ContainerStarted","Data":"fd18ad87dc79f7b95db43c3756df3a5bb3e8f58c7e3a6648ef7346a28e4a5cfe"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.717831 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" event={"ID":"352f59bb-69ee-46d0-862c-d839ac334b35","Type":"ContainerStarted","Data":"2e207cb8bd75964a3baad1154e873808723fd3ca57f41a30ef408b021e3039dc"} Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.765942 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.768532 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" podStartSLOduration=138.768496646 podStartE2EDuration="2m18.768496646s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:37.763023702 +0000 UTC m=+157.011787221" watchObservedRunningTime="2025-12-09 12:07:37.768496646 +0000 UTC m=+157.017260165" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.775617 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cjdjc" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.812485 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.812563 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfst\" (UniqueName: 
\"kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.812626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.815975 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.817620 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.850776 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfst\" (UniqueName: \"kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst\") pod \"redhat-marketplace-jgffv\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.905330 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:07:37 crc kubenswrapper[4703]: I1209 12:07:37.986851 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"] Dec 09 12:07:38 crc kubenswrapper[4703]: W1209 12:07:38.021516 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5508ed7b_36a7_4d73_a489_6d7eb6eb6c3e.slice/crio-6c84979f6638f5504aeb7d56a2788bb369a2d8d857e6600da4354ad528dae1cf WatchSource:0}: Error finding container 6c84979f6638f5504aeb7d56a2788bb369a2d8d857e6600da4354ad528dae1cf: Status 404 returned error can't find the container with id 6c84979f6638f5504aeb7d56a2788bb369a2d8d857e6600da4354ad528dae1cf Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.078745 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.079614 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.094594 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.094858 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.094972 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.118544 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.118620 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.188183 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"] Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.206635 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"] Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.214008 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.216659 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.224586 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.224771 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.225613 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.254954 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.306831 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.327306 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.327356 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.327423 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7gxl\" (UniqueName: \"kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.357584 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"] Dec 09 12:07:38 crc kubenswrapper[4703]: W1209 12:07:38.378178 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5cb3256_7c02_446e_8c63_d1559675704e.slice/crio-c78e35a4d92dc3cbc0ab96cf3eaa37329a0af4701c90b4038fcc6a50c9473826 WatchSource:0}: Error finding container c78e35a4d92dc3cbc0ab96cf3eaa37329a0af4701c90b4038fcc6a50c9473826: Status 404 returned error can't find the container with id c78e35a4d92dc3cbc0ab96cf3eaa37329a0af4701c90b4038fcc6a50c9473826 Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.428521 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztdz\" (UniqueName: \"kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz\") pod \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.428758 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume\") pod \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.428830 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume\") pod \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\" (UID: \"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80\") " Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.429427 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7gxl\" (UniqueName: \"kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: 
I1209 12:07:38.429667 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.429755 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.431529 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume" (OuterVolumeSpecName: "config-volume") pod "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" (UID: "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.431864 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.431917 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.435933 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" (UID: "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.437469 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz" (OuterVolumeSpecName: "kube-api-access-9ztdz") pod "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" (UID: "049a6f1a-4b47-421d-bfd6-f1c89a5c0a80"). InnerVolumeSpecName "kube-api-access-9ztdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.456330 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7gxl\" (UniqueName: \"kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl\") pod \"redhat-operators-9nc2d\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.480153 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.531815 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztdz\" (UniqueName: \"kubernetes.io/projected/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-kube-api-access-9ztdz\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.531849 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.531860 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.604367 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.614071 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"] Dec 09 12:07:38 crc kubenswrapper[4703]: E1209 12:07:38.614378 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" containerName="collect-profiles" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.614391 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" containerName="collect-profiles" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.614550 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" containerName="collect-profiles" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.615676 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.633818 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"] Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.725277 4703 generic.go:334] "Generic (PLEG): container finished" podID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerID="f78172f60d26bf604615d807f573f6f6c127e409eee7646867d7de318a7d1ca4" exitCode=0 Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.725406 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerDied","Data":"f78172f60d26bf604615d807f573f6f6c127e409eee7646867d7de318a7d1ca4"} Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.725444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerStarted","Data":"6c84979f6638f5504aeb7d56a2788bb369a2d8d857e6600da4354ad528dae1cf"} Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.735700 4703 generic.go:334] "Generic (PLEG): container finished" podID="d5cb3256-7c02-446e-8c63-d1559675704e" containerID="89c7fdf1d249a00dfdf3ceb78432ed76e627b8bd24f97c44e641d49011c76b0e" exitCode=0 Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.735830 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerDied","Data":"89c7fdf1d249a00dfdf3ceb78432ed76e627b8bd24f97c44e641d49011c76b0e"} Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.735863 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerStarted","Data":"c78e35a4d92dc3cbc0ab96cf3eaa37329a0af4701c90b4038fcc6a50c9473826"} Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.740662 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.740727 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.741471 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc84x\" (UniqueName: \"kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.741600 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" 
event={"ID":"049a6f1a-4b47-421d-bfd6-f1c89a5c0a80","Type":"ContainerDied","Data":"86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7"} Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.741710 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86adb7ef60f42e2269e050cbe3f4c302652145a80459721cb0c076435b86ddc7" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.741884 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.741738 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.845251 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc84x\" (UniqueName: \"kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.845649 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.845682 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.850531 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.850873 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:38 crc kubenswrapper[4703]: I1209 12:07:38.893203 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc84x\" (UniqueName: \"kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x\") pod \"redhat-operators-pl7l8\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") " pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.003731 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.031787 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:07:39 crc kubenswrapper[4703]: W1209 12:07:39.134666 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod69c63478_7f81_40ad_aad1_9d6e00c3a506.slice/crio-4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63 WatchSource:0}: Error finding container 4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63: Status 404 returned error can't find the container with id 4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63 Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.333623 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"] Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.431488 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"] Dec 09 12:07:39 crc kubenswrapper[4703]: W1209 12:07:39.450628 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8cddd58_cfa9_4371_bbfb_93747eceefca.slice/crio-47a8bb095033af191ac0ec13aba863ec8246c576e23336da1eac88ccdc8768ef WatchSource:0}: Error finding container 47a8bb095033af191ac0ec13aba863ec8246c576e23336da1eac88ccdc8768ef: Status 404 returned error can't find the container with id 47a8bb095033af191ac0ec13aba863ec8246c576e23336da1eac88ccdc8768ef Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.757412 4703 generic.go:334] "Generic (PLEG): container finished" podID="2e4c8435-609b-49fe-9f13-17547856b18a" containerID="0fcc5b77b0e92f6d4c0c2c2ecdafcb0a1fa9ee0a6d6a11288f19934db17806d6" exitCode=0 Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.758126 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerDied","Data":"0fcc5b77b0e92f6d4c0c2c2ecdafcb0a1fa9ee0a6d6a11288f19934db17806d6"} Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.758224 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerStarted","Data":"dabb28cd7f7044484b4b997a8195183b37b30faec42c038faa7a16534bc48458"} Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.761223 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69c63478-7f81-40ad-aad1-9d6e00c3a506","Type":"ContainerStarted","Data":"4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63"} Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.770814 4703 generic.go:334] "Generic (PLEG): container finished" podID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerID="bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef" exitCode=0 Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.770855 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerDied","Data":"bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef"} Dec 09 12:07:39 crc kubenswrapper[4703]: I1209 12:07:39.770930 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerStarted","Data":"47a8bb095033af191ac0ec13aba863ec8246c576e23336da1eac88ccdc8768ef"} Dec 09 12:07:40 crc kubenswrapper[4703]: I1209 12:07:40.787887 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69c63478-7f81-40ad-aad1-9d6e00c3a506","Type":"ContainerStarted","Data":"e2a49cdd4b85aaac72b9ce8b0af1a73329a9abfcdde8f215d2cb3b1add91d72c"} Dec 09 12:07:40 crc kubenswrapper[4703]: I1209 12:07:40.809589 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.80957143 podStartE2EDuration="2.80957143s" podCreationTimestamp="2025-12-09 12:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:40.807833947 +0000 UTC m=+160.056597466" watchObservedRunningTime="2025-12-09 12:07:40.80957143 +0000 UTC m=+160.058334949" Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.006244 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.036389 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f199898-7916-48b6-b5e6-c878bacae384-metrics-certs\") pod \"network-metrics-daemon-pf4r7\" (UID: \"9f199898-7916-48b6-b5e6-c878bacae384\") " pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.100423 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pf4r7" Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.704654 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pf4r7"] Dec 09 12:07:41 crc kubenswrapper[4703]: W1209 12:07:41.741450 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f199898_7916_48b6_b5e6_c878bacae384.slice/crio-8c0552881ac5597fc7bc9b6b333600455dc018b4ab8121666c90dc2d6ec5df5f WatchSource:0}: Error finding container 8c0552881ac5597fc7bc9b6b333600455dc018b4ab8121666c90dc2d6ec5df5f: Status 404 returned error can't find the container with id 8c0552881ac5597fc7bc9b6b333600455dc018b4ab8121666c90dc2d6ec5df5f Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.838715 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" event={"ID":"9f199898-7916-48b6-b5e6-c878bacae384","Type":"ContainerStarted","Data":"8c0552881ac5597fc7bc9b6b333600455dc018b4ab8121666c90dc2d6ec5df5f"} Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.876835 4703 generic.go:334] "Generic (PLEG): container finished" podID="69c63478-7f81-40ad-aad1-9d6e00c3a506" containerID="e2a49cdd4b85aaac72b9ce8b0af1a73329a9abfcdde8f215d2cb3b1add91d72c" exitCode=0 Dec 09 12:07:41 crc kubenswrapper[4703]: I1209 12:07:41.876884 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69c63478-7f81-40ad-aad1-9d6e00c3a506","Type":"ContainerDied","Data":"e2a49cdd4b85aaac72b9ce8b0af1a73329a9abfcdde8f215d2cb3b1add91d72c"} Dec 09 12:07:42 crc kubenswrapper[4703]: I1209 12:07:42.362750 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7phbd" Dec 09 12:07:42 crc kubenswrapper[4703]: I1209 12:07:42.903121 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" event={"ID":"9f199898-7916-48b6-b5e6-c878bacae384","Type":"ContainerStarted","Data":"7194cf1fd310b7ed0915a05c05a7432419f1ea4eb2cfddb86c89c4fcc116441d"} Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.479965 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.575657 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir\") pod \"69c63478-7f81-40ad-aad1-9d6e00c3a506\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.576057 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access\") pod \"69c63478-7f81-40ad-aad1-9d6e00c3a506\" (UID: \"69c63478-7f81-40ad-aad1-9d6e00c3a506\") " Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.581308 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69c63478-7f81-40ad-aad1-9d6e00c3a506" (UID: "69c63478-7f81-40ad-aad1-9d6e00c3a506"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.594233 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69c63478-7f81-40ad-aad1-9d6e00c3a506" (UID: "69c63478-7f81-40ad-aad1-9d6e00c3a506"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.678142 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c63478-7f81-40ad-aad1-9d6e00c3a506-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.678181 4703 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69c63478-7f81-40ad-aad1-9d6e00c3a506-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.929662 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.929660 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"69c63478-7f81-40ad-aad1-9d6e00c3a506","Type":"ContainerDied","Data":"4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63"} Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.929858 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4adbc0515fd655b03c2a3d83b61151b32942497a68ae3cb15af541c653ffed63" Dec 09 12:07:43 crc kubenswrapper[4703]: I1209 12:07:43.934300 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pf4r7" event={"ID":"9f199898-7916-48b6-b5e6-c878bacae384","Type":"ContainerStarted","Data":"82a4d798caa3ef3fd5affba46bfba99ff9502c0fff3a2527c1155971ff71941f"} Dec 09 12:07:46 crc kubenswrapper[4703]: I1209 12:07:46.758914 4703 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:46 crc kubenswrapper[4703]: I1209 12:07:46.759399 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:46 crc kubenswrapper[4703]: I1209 12:07:46.758954 4703 patch_prober.go:28] interesting pod/downloads-7954f5f757-hqx9m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Dec 09 12:07:46 crc kubenswrapper[4703]: I1209 12:07:46.759627 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hqx9m" podUID="36981648-d6b7-4c08-96ec-622d069c4c19" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Dec 09 12:07:47 crc kubenswrapper[4703]: I1209 12:07:47.340342 4703 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:47 crc kubenswrapper[4703]: I1209 12:07:47.349330 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:07:47 crc kubenswrapper[4703]: I1209 12:07:47.369122 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pf4r7" podStartSLOduration=148.369103905 podStartE2EDuration="2m28.369103905s" podCreationTimestamp="2025-12-09 12:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:43.95122375 +0000 UTC m=+163.199987279" watchObservedRunningTime="2025-12-09 12:07:47.369103905 +0000 UTC m=+166.617867424" Dec 09 12:07:56 crc kubenswrapper[4703]: I1209 12:07:56.603987 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:07:56 crc kubenswrapper[4703]: I1209 12:07:56.765530 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hqx9m" Dec 09 12:07:57 crc kubenswrapper[4703]: I1209 12:07:57.791551 4703 patch_prober.go:28] interesting pod/router-default-5444994796-cjdjc container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 12:07:57 crc kubenswrapper[4703]: I1209 12:07:57.791714 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-cjdjc" podUID="de18af96-27c1-4d28-acfe-e0317de38dba" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:08:00 crc kubenswrapper[4703]: I1209 12:08:00.083924 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:08:00 crc kubenswrapper[4703]: I1209 12:08:00.083992 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:08:07 crc kubenswrapper[4703]: I1209 12:08:07.273346 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-knw7d" Dec 09 12:08:07 crc kubenswrapper[4703]: I1209 12:08:07.457155 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:09 crc kubenswrapper[4703]: E1209 12:08:09.044822 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 12:08:09 crc kubenswrapper[4703]: E1209 12:08:09.045377 4703 
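[Annotation] Two distinct probe failure modes appear above: "connect: connection refused" (nothing is listening on the target port yet, as for download-server and machine-config-daemon) and "context deadline exceeded" (the router accepted the connection but did not answer within the probe timeout). A minimal sketch, in Python rather than the kubelet's Go, of how an HTTP prober can tell the two apart; the address is copied from the log purely for illustration:

```python
# Illustrative only -- not the kubelet's prober implementation.
import http.client
import socket

def http_probe(host: str, port: int, path: str = "/", timeout: float = 1.0) -> str:
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("GET", path)
        status = conn.getresponse().status
        return "success" if 200 <= status < 400 else f"failure: HTTP {status}"
    except ConnectionRefusedError:
        # Port closed: the "dial tcp ...: connect: connection refused" case.
        return "failure: connection refused"
    except (socket.timeout, TimeoutError):
        # No answer within the deadline: the "context deadline exceeded" case.
        return "failure: timeout"
    finally:
        conn.close()

print(http_probe("10.217.0.16", 8080))  # endpoint taken from the log above
```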
Dec 09 12:08:09 crc kubenswrapper[4703]: E1209 12:08:09.045377 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qb4cq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8xkww_openshift-marketplace(621283ab-7eb7-4952-9059-c3c4209bca7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:09 crc kubenswrapper[4703]: E1209 12:08:09.046567 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8xkww" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.471625 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 09 12:08:11 crc kubenswrapper[4703]: E1209 12:08:11.471835 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c63478-7f81-40ad-aad1-9d6e00c3a506" containerName="pruner"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.471848 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c63478-7f81-40ad-aad1-9d6e00c3a506" containerName="pruner"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.471949 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c63478-7f81-40ad-aad1-9d6e00c3a506" containerName="pruner"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.472342 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.475768 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.476220 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.480301 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.580972 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.581021 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.682426 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.682551 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.682524 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.701784 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:11 crc kubenswrapper[4703]: I1209 12:08:11.797620 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.799130 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8xkww" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.868448 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.868594 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnbc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6jqdn_openshift-marketplace(be89872a-5325-41c1-85b8-6c9880d40f68): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.869832 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6jqdn" podUID="be89872a-5325-41c1-85b8-6c9880d40f68"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.886028 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.886452 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r94k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p2dxk_openshift-marketplace(a6d97ca9-dbfc-4bb7-9784-32152f514675): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:13 crc kubenswrapper[4703]: E1209 12:08:13.887877 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p2dxk" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.080178 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.081555 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.081659 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.256029 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.256094 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.256132 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.357965 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.358056 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.358102 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.358151 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.358259 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.386522 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access\") pod \"installer-9-crc\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " pod="openshift-kube-apiserver/installer-9-crc"
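[Annotation] The reconciler_common/operation_generator entries above trace the kubelet's volume reconciler: for each pod it compares the desired set of volumes (from the pod spec) with what is actually mounted, logging VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded"; teardown later runs the mirror image ("UnmountVolume started", "TearDown succeeded", "Volume detached"). A toy sketch of that desired-state/actual-state pattern; the prints echo the log messages but this is not kubelet code:

```python
# Conceptual desired-vs-actual reconcile loop (illustrative, not kubelet code).
desired = {"kubelet-dir", "kube-api-access", "var-lock"}  # from the pod spec
actual: set[str] = set()                                  # currently mounted

def reconcile() -> None:
    for vol in sorted(desired - actual):   # missing -> mount
        print(f'MountVolume started for volume "{vol}"')
        actual.add(vol)
        print(f'MountVolume.SetUp succeeded for volume "{vol}"')
    for vol in sorted(actual - desired):   # stale -> unmount
        print(f'UnmountVolume started for volume "{vol}"')
        actual.discard(vol)
        print(f'Volume detached for volume "{vol}"')

reconcile()      # mounts all three volumes, as for installer-9-crc above
desired.clear()  # pod deleted: desired state becomes empty
reconcile()      # tears them down, mirroring the teardown entries later on
```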
Dec 09 12:08:17 crc kubenswrapper[4703]: I1209 12:08:17.407243 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.352043 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6jqdn" podUID="be89872a-5325-41c1-85b8-6c9880d40f68"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.352095 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p2dxk" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.435776 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.435966 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qxgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f8tbx_openshift-marketplace(6ef66484-04d6-4994-be99-16da9c301ea2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.435934 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.436549 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vc84x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pl7l8_openshift-marketplace(e8cddd58-cfa9-4371-bbfb-93747eceefca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.437160 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f8tbx" podUID="6ef66484-04d6-4994-be99-16da9c301ea2"
Dec 09 12:08:19 crc kubenswrapper[4703]: E1209 12:08:19.438385 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pl7l8" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.466328 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f8tbx" podUID="6ef66484-04d6-4994-be99-16da9c301ea2"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.466531 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pl7l8" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.534040 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.534173 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klflg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-bkmjk_openshift-marketplace(5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.536084 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-bkmjk" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.577585 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.577736 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bfst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jgffv_openshift-marketplace(d5cb3256-7c02-446e-8c63-d1559675704e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.580220 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jgffv" podUID="d5cb3256-7c02-446e-8c63-d1559675704e"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.616919 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.617458 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7gxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9nc2d_openshift-marketplace(2e4c8435-609b-49fe-9f13-17547856b18a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:08:20 crc kubenswrapper[4703]: E1209 12:08:20.618643 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9nc2d" podUID="2e4c8435-609b-49fe-9f13-17547856b18a"
Dec 09 12:08:20 crc kubenswrapper[4703]: I1209 12:08:20.921199 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 09 12:08:20 crc kubenswrapper[4703]: I1209 12:08:20.970676 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 09 12:08:21 crc kubenswrapper[4703]: I1209 12:08:21.191650 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"47f5e685-7028-4185-9345-fbb2aa35ca07","Type":"ContainerStarted","Data":"af932872b81e79802a56c37bf20fd5a066a464efa34b7017dff74982758d3f65"}
Dec 09 12:08:21 crc kubenswrapper[4703]: I1209 12:08:21.192560 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f","Type":"ContainerStarted","Data":"dcc736f826133793cd1f24fe06ce414d0e43bf527105c1a3404a8d618bc40ffb"}
Dec 09 12:08:21 crc kubenswrapper[4703]: E1209 12:08:21.194613 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-bkmjk" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e"
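[Annotation] Each catalog pod above fails first with ErrImagePull (the pull itself failed, here because the image copy was canceled) and is then parked in ImagePullBackOff while the kubelet waits out an exponentially growing delay before retrying. A sketch of that back-off schedule; the 10 s initial delay and 5-minute cap are the kubelet's commonly cited defaults, stated here as an assumption rather than read from this cluster's configuration:

```python
# Exponential back-off sketch behind "Back-off pulling image ...".
# initial=10s and cap=300s are assumed defaults, not values from this log.
def backoff_delays(initial: float = 10.0, factor: float = 2.0, cap: float = 300.0):
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor

for attempt, delay in enumerate(backoff_delays(), start=1):
    print(f"pull attempt {attempt}: wait {delay:.0f}s before retrying")
    if attempt == 6:
        break  # 10, 20, 40, 80, 160, 300 (capped) seconds
```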
Dec 09 12:08:21 crc kubenswrapper[4703]: E1209 12:08:21.194723 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jgffv" podUID="d5cb3256-7c02-446e-8c63-d1559675704e"
Dec 09 12:08:21 crc kubenswrapper[4703]: E1209 12:08:21.194902 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9nc2d" podUID="2e4c8435-609b-49fe-9f13-17547856b18a"
Dec 09 12:08:22 crc kubenswrapper[4703]: I1209 12:08:22.218343 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"47f5e685-7028-4185-9345-fbb2aa35ca07","Type":"ContainerStarted","Data":"2ac3d0eb46d06cf0e5e865d7b300e7d08b6ccd81d6a03baaa7e2e66dc80b1591"}
Dec 09 12:08:22 crc kubenswrapper[4703]: I1209 12:08:22.220240 4703 generic.go:334] "Generic (PLEG): container finished" podID="f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" containerID="8e98ead19740e298db64488826913fba377d9acd5caf049f3809e564a5d5748d" exitCode=0
Dec 09 12:08:22 crc kubenswrapper[4703]: I1209 12:08:22.220291 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f","Type":"ContainerDied","Data":"8e98ead19740e298db64488826913fba377d9acd5caf049f3809e564a5d5748d"}
Dec 09 12:08:22 crc kubenswrapper[4703]: I1209 12:08:22.249127 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.249111107 podStartE2EDuration="5.249111107s" podCreationTimestamp="2025-12-09 12:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:22.242889576 +0000 UTC m=+201.491653095" watchObservedRunningTime="2025-12-09 12:08:22.249111107 +0000 UTC m=+201.497874626"
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.510170 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.640914 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir\") pod \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") "
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.641097 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" (UID: "f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.641178 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access\") pod \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\" (UID: \"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f\") "
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.641775 4703 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.650244 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" (UID: "f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:08:23 crc kubenswrapper[4703]: I1209 12:08:23.742657 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:24 crc kubenswrapper[4703]: I1209 12:08:24.233827 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f","Type":"ContainerDied","Data":"dcc736f826133793cd1f24fe06ce414d0e43bf527105c1a3404a8d618bc40ffb"}
Dec 09 12:08:24 crc kubenswrapper[4703]: I1209 12:08:24.233870 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc736f826133793cd1f24fe06ce414d0e43bf527105c1a3404a8d618bc40ffb"
Dec 09 12:08:24 crc kubenswrapper[4703]: I1209 12:08:24.233895 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 12:08:29 crc kubenswrapper[4703]: I1209 12:08:29.267085 4703 generic.go:334] "Generic (PLEG): container finished" podID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerID="d6a59fab2942575e02d88bde0007eb122ee8e310ec2ab013a6b0ce5ba5f418eb" exitCode=0
Dec 09 12:08:29 crc kubenswrapper[4703]: I1209 12:08:29.267182 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerDied","Data":"d6a59fab2942575e02d88bde0007eb122ee8e310ec2ab013a6b0ce5ba5f418eb"}
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.083649 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.084073 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.084146 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk"
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.084858 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.084984 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378" gracePeriod=600
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.274960 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378" exitCode=0
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.275049 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378"}
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.276926 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerStarted","Data":"c37c9e3e8358b2782077cadf7336e0fc1751fa099d459ac7b6070a1aba612859"}
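[Annotation] The machine-config-daemon sequence above shows what a failed liveness probe triggers: the kubelet marks the container unhealthy, logs that it "will be restarted", and kills it with a grace period (600 s here; the deleted catalog pod further down gets only 2 s). The contract is SIGTERM first, SIGKILL only if the process outlives the grace period; a small self-contained sketch of that behavior (the real thing happens in CRI-O at the runtime level, and the sleep child below is just a stand-in):

```python
# Grace-period termination sketch: terminate(), wait, then kill().
import subprocess

def stop_with_grace(proc: subprocess.Popen, grace_seconds: float) -> int:
    proc.terminate()                          # SIGTERM: request shutdown
    try:
        return proc.wait(timeout=grace_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()                           # SIGKILL: grace period exhausted
        return proc.wait()

child = subprocess.Popen(["sleep", "60"])     # stand-in for a container process
print("exit status:", stop_with_grace(child, grace_seconds=2.0))
```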
Dec 09 12:08:30 crc kubenswrapper[4703]: I1209 12:08:30.304852 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xkww" podStartSLOduration=3.192064229 podStartE2EDuration="56.304824695s" podCreationTimestamp="2025-12-09 12:07:34 +0000 UTC" firstStartedPulling="2025-12-09 12:07:36.58067848 +0000 UTC m=+155.829441999" lastFinishedPulling="2025-12-09 12:08:29.693438946 +0000 UTC m=+208.942202465" observedRunningTime="2025-12-09 12:08:30.303316339 +0000 UTC m=+209.552079858" watchObservedRunningTime="2025-12-09 12:08:30.304824695 +0000 UTC m=+209.553588214"
Dec 09 12:08:31 crc kubenswrapper[4703]: I1209 12:08:31.284244 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585"}
Dec 09 12:08:32 crc kubenswrapper[4703]: I1209 12:08:32.295704 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerStarted","Data":"337d292b3c04637c2858d0f0d502bde68329ed3d7fb1c0b8eb00dc3f4baf5856"}
Dec 09 12:08:33 crc kubenswrapper[4703]: I1209 12:08:33.302836 4703 generic.go:334] "Generic (PLEG): container finished" podID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerID="337d292b3c04637c2858d0f0d502bde68329ed3d7fb1c0b8eb00dc3f4baf5856" exitCode=0
Dec 09 12:08:33 crc kubenswrapper[4703]: I1209 12:08:33.303211 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerDied","Data":"337d292b3c04637c2858d0f0d502bde68329ed3d7fb1c0b8eb00dc3f4baf5856"}
Dec 09 12:08:33 crc kubenswrapper[4703]: I1209 12:08:33.313163 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerStarted","Data":"e59bdddbb96e033179062515ac6627b681b00afb0f97221563fad30b1774fee2"}
Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.320529 4703 generic.go:334] "Generic (PLEG): container finished" podID="2e4c8435-609b-49fe-9f13-17547856b18a" containerID="e59bdddbb96e033179062515ac6627b681b00afb0f97221563fad30b1774fee2" exitCode=0
Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.320589 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerDied","Data":"e59bdddbb96e033179062515ac6627b681b00afb0f97221563fad30b1774fee2"}
Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.323084 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerStarted","Data":"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0"}
Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.325512 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerStarted","Data":"ce23f2cecd5ef604f45a01af6cc4cbc196b5c36a17becd4aa72b345c05fc6499"}
Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.327370 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerStarted","Data":"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943"}
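[Annotation] The two durations in the community-operators-8xkww entry above differ by exactly the time spent pulling the catalog image: podStartE2EDuration is wall-clock time from pod creation to running, while podStartSLOduration excludes the image pull (which is why network-metrics-daemon earlier, with zero-value pull timestamps, reports the two values equal). Checking the arithmetic against the monotonic m=+ offsets in the log:

```python
# Numbers copied from the pod_startup_latency_tracker entry above.
e2e = 56.304824695                     # podStartE2EDuration
pull_start = 155.829441999             # firstStartedPulling (m=+ offset)
pull_end = 208.942202465               # lastFinishedPulling (m=+ offset)

slo = e2e - (pull_end - pull_start)    # startup time excluding the image pull
print(round(slo, 9))                   # 3.192064229 == podStartSLOduration
```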
12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.328986 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerStarted","Data":"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93"} Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.330752 4703 generic.go:334] "Generic (PLEG): container finished" podID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerID="674065bafe35563c804dee9435387e8d63129a31218dbf24208de54954d20cf3" exitCode=0 Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.330786 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerDied","Data":"674065bafe35563c804dee9435387e8d63129a31218dbf24208de54954d20cf3"} Dec 09 12:08:34 crc kubenswrapper[4703]: I1209 12:08:34.450100 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2dxk" podStartSLOduration=3.267916964 podStartE2EDuration="59.450081449s" podCreationTimestamp="2025-12-09 12:07:35 +0000 UTC" firstStartedPulling="2025-12-09 12:07:37.691904077 +0000 UTC m=+156.940667596" lastFinishedPulling="2025-12-09 12:08:33.874068562 +0000 UTC m=+213.122832081" observedRunningTime="2025-12-09 12:08:34.449110485 +0000 UTC m=+213.697874004" watchObservedRunningTime="2025-12-09 12:08:34.450081449 +0000 UTC m=+213.698844988" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.166034 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xkww" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.166109 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xkww" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.284514 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xkww" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.351071 4703 generic.go:334] "Generic (PLEG): container finished" podID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerID="a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943" exitCode=0 Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.351128 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerDied","Data":"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.357666 4703 generic.go:334] "Generic (PLEG): container finished" podID="6ef66484-04d6-4994-be99-16da9c301ea2" containerID="00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93" exitCode=0 Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.357742 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerDied","Data":"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.368583 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" 
event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerStarted","Data":"583dcb944ad38886efe2ead42442edcae149f2128528491d31a626f8c7707de2"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.372951 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerStarted","Data":"5c68a02d58bb14fb5c8b0d3e43321482d23578d912af7b4f9ca94f7bacebfbb0"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.380797 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerStarted","Data":"b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.384809 4703 generic.go:334] "Generic (PLEG): container finished" podID="be89872a-5325-41c1-85b8-6c9880d40f68" containerID="18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0" exitCode=0 Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.384909 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerDied","Data":"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0"} Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.450032 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xkww" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.451018 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bkmjk" podStartSLOduration=2.171591551 podStartE2EDuration="58.450994051s" podCreationTimestamp="2025-12-09 12:07:37 +0000 UTC" firstStartedPulling="2025-12-09 12:07:38.727380448 +0000 UTC m=+157.976143967" lastFinishedPulling="2025-12-09 12:08:35.006782948 +0000 UTC m=+214.255546467" observedRunningTime="2025-12-09 12:08:35.429397177 +0000 UTC m=+214.678160696" watchObservedRunningTime="2025-12-09 12:08:35.450994051 +0000 UTC m=+214.699757570" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.467899 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9nc2d" podStartSLOduration=2.518868983 podStartE2EDuration="57.467879892s" podCreationTimestamp="2025-12-09 12:07:38 +0000 UTC" firstStartedPulling="2025-12-09 12:07:39.763852831 +0000 UTC m=+159.012616350" lastFinishedPulling="2025-12-09 12:08:34.71286374 +0000 UTC m=+213.961627259" observedRunningTime="2025-12-09 12:08:35.466783855 +0000 UTC m=+214.715547374" watchObservedRunningTime="2025-12-09 12:08:35.467879892 +0000 UTC m=+214.716643401" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.714933 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2dxk" Dec 09 12:08:35 crc kubenswrapper[4703]: I1209 12:08:35.714984 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2dxk" Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.410717 4703 generic.go:334] "Generic (PLEG): container finished" podID="d5cb3256-7c02-446e-8c63-d1559675704e" containerID="5c68a02d58bb14fb5c8b0d3e43321482d23578d912af7b4f9ca94f7bacebfbb0" exitCode=0 Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.410788 4703 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerDied","Data":"5c68a02d58bb14fb5c8b0d3e43321482d23578d912af7b4f9ca94f7bacebfbb0"} Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.410822 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerStarted","Data":"473c62f891dfff88e7555985b41a5d413ff1f32439938d60c3b99c3cc6edd100"} Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.415416 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerStarted","Data":"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d"} Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.418062 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerStarted","Data":"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2"} Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.421407 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerStarted","Data":"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978"} Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.432078 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jgffv" podStartSLOduration=2.4023184029999998 podStartE2EDuration="59.432063362s" podCreationTimestamp="2025-12-09 12:07:37 +0000 UTC" firstStartedPulling="2025-12-09 12:07:38.77025558 +0000 UTC m=+158.019019099" lastFinishedPulling="2025-12-09 12:08:35.800000539 +0000 UTC m=+215.048764058" observedRunningTime="2025-12-09 12:08:36.431071047 +0000 UTC m=+215.679834566" watchObservedRunningTime="2025-12-09 12:08:36.432063362 +0000 UTC m=+215.680826881" Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.451815 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pl7l8" podStartSLOduration=2.487928959 podStartE2EDuration="58.451796502s" podCreationTimestamp="2025-12-09 12:07:38 +0000 UTC" firstStartedPulling="2025-12-09 12:07:39.772307584 +0000 UTC m=+159.021071103" lastFinishedPulling="2025-12-09 12:08:35.736175127 +0000 UTC m=+214.984938646" observedRunningTime="2025-12-09 12:08:36.449511256 +0000 UTC m=+215.698274775" watchObservedRunningTime="2025-12-09 12:08:36.451796502 +0000 UTC m=+215.700560021" Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.475799 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jqdn" podStartSLOduration=3.291688478 podStartE2EDuration="1m1.475785125s" podCreationTimestamp="2025-12-09 12:07:35 +0000 UTC" firstStartedPulling="2025-12-09 12:07:37.689059172 +0000 UTC m=+156.937822691" lastFinishedPulling="2025-12-09 12:08:35.873155819 +0000 UTC m=+215.121919338" observedRunningTime="2025-12-09 12:08:36.473368276 +0000 UTC m=+215.722131805" watchObservedRunningTime="2025-12-09 12:08:36.475785125 +0000 UTC m=+215.724548644" Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.505301 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-f8tbx" podStartSLOduration=3.414488439 podStartE2EDuration="1m1.505285112s" podCreationTimestamp="2025-12-09 12:07:35 +0000 UTC" firstStartedPulling="2025-12-09 12:07:37.675531317 +0000 UTC m=+156.924294836" lastFinishedPulling="2025-12-09 12:08:35.76632799 +0000 UTC m=+215.015091509" observedRunningTime="2025-12-09 12:08:36.504325309 +0000 UTC m=+215.753088828" watchObservedRunningTime="2025-12-09 12:08:36.505285112 +0000 UTC m=+215.754048631" Dec 09 12:08:36 crc kubenswrapper[4703]: I1209 12:08:36.756068 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p2dxk" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="registry-server" probeResult="failure" output=< Dec 09 12:08:36 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:08:36 crc kubenswrapper[4703]: > Dec 09 12:08:37 crc kubenswrapper[4703]: I1209 12:08:37.539210 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:08:37 crc kubenswrapper[4703]: I1209 12:08:37.539266 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:08:37 crc kubenswrapper[4703]: I1209 12:08:37.593066 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:08:37 crc kubenswrapper[4703]: I1209 12:08:37.905977 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:37 crc kubenswrapper[4703]: I1209 12:08:37.906059 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:38 crc kubenswrapper[4703]: I1209 12:08:38.605641 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:08:38 crc kubenswrapper[4703]: I1209 12:08:38.607556 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:08:38 crc kubenswrapper[4703]: I1209 12:08:38.953050 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jgffv" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="registry-server" probeResult="failure" output=< Dec 09 12:08:38 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:08:38 crc kubenswrapper[4703]: > Dec 09 12:08:39 crc kubenswrapper[4703]: I1209 12:08:39.005023 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:08:39 crc kubenswrapper[4703]: I1209 12:08:39.005328 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:08:39 crc kubenswrapper[4703]: I1209 12:08:39.649838 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9nc2d" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server" probeResult="failure" output=< Dec 09 12:08:39 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:08:39 crc kubenswrapper[4703]: > Dec 09 12:08:40 crc kubenswrapper[4703]: I1209 12:08:40.040449 4703 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-pl7l8" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="registry-server" probeResult="failure" output=< Dec 09 12:08:40 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:08:40 crc kubenswrapper[4703]: > Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.531255 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6jqdn" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.531819 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jqdn" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.579838 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6jqdn" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.751171 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2dxk" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.792518 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2dxk" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.970170 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f8tbx" Dec 09 12:08:45 crc kubenswrapper[4703]: I1209 12:08:45.970513 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f8tbx" Dec 09 12:08:46 crc kubenswrapper[4703]: I1209 12:08:46.007182 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f8tbx" Dec 09 12:08:46 crc kubenswrapper[4703]: I1209 12:08:46.514979 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f8tbx" Dec 09 12:08:46 crc kubenswrapper[4703]: I1209 12:08:46.533177 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jqdn" Dec 09 12:08:47 crc kubenswrapper[4703]: I1209 12:08:47.010561 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"] Dec 09 12:08:47 crc kubenswrapper[4703]: I1209 12:08:47.587384 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:08:47 crc kubenswrapper[4703]: I1209 12:08:47.964893 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.009990 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.482356 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f8tbx" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="registry-server" containerID="cri-o://0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978" gracePeriod=2 Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.647514 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:08:48 crc 
Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.683860 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9nc2d"
Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.811931 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jqdn"]
Dec 09 12:08:48 crc kubenswrapper[4703]: I1209 12:08:48.812170 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6jqdn" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="registry-server" containerID="cri-o://153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d" gracePeriod=2
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.096653 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pl7l8"
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.143610 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pl7l8"
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.213309 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jqdn"
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.335870 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnbc6\" (UniqueName: \"kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6\") pod \"be89872a-5325-41c1-85b8-6c9880d40f68\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.336829 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8tbx"
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.337486 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content\") pod \"be89872a-5325-41c1-85b8-6c9880d40f68\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.337606 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities\") pod \"be89872a-5325-41c1-85b8-6c9880d40f68\" (UID: \"be89872a-5325-41c1-85b8-6c9880d40f68\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.338911 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities" (OuterVolumeSpecName: "utilities") pod "be89872a-5325-41c1-85b8-6c9880d40f68" (UID: "be89872a-5325-41c1-85b8-6c9880d40f68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.342646 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6" (OuterVolumeSpecName: "kube-api-access-rnbc6") pod "be89872a-5325-41c1-85b8-6c9880d40f68" (UID: "be89872a-5325-41c1-85b8-6c9880d40f68"). InnerVolumeSpecName "kube-api-access-rnbc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.403970 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be89872a-5325-41c1-85b8-6c9880d40f68" (UID: "be89872a-5325-41c1-85b8-6c9880d40f68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438644 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities\") pod \"6ef66484-04d6-4994-be99-16da9c301ea2\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438715 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content\") pod \"6ef66484-04d6-4994-be99-16da9c301ea2\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438761 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxgf\" (UniqueName: \"kubernetes.io/projected/6ef66484-04d6-4994-be99-16da9c301ea2-kube-api-access-2qxgf\") pod \"6ef66484-04d6-4994-be99-16da9c301ea2\" (UID: \"6ef66484-04d6-4994-be99-16da9c301ea2\") "
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438900 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438912 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be89872a-5325-41c1-85b8-6c9880d40f68-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.438921 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnbc6\" (UniqueName: \"kubernetes.io/projected/be89872a-5325-41c1-85b8-6c9880d40f68-kube-api-access-rnbc6\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.440036 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities" (OuterVolumeSpecName: "utilities") pod "6ef66484-04d6-4994-be99-16da9c301ea2" (UID: "6ef66484-04d6-4994-be99-16da9c301ea2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493466 4703 generic.go:334] "Generic (PLEG): container finished" podID="be89872a-5325-41c1-85b8-6c9880d40f68" containerID="153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d" exitCode=0 Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493544 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jqdn" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493545 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerDied","Data":"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d"} Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493632 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jqdn" event={"ID":"be89872a-5325-41c1-85b8-6c9880d40f68","Type":"ContainerDied","Data":"50959f8aa94dcb5d84ddaab4e8727a8e67756d433215860ed8be939dfc82609e"} Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493666 4703 scope.go:117] "RemoveContainer" containerID="153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.493732 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ef66484-04d6-4994-be99-16da9c301ea2" (UID: "6ef66484-04d6-4994-be99-16da9c301ea2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.500006 4703 generic.go:334] "Generic (PLEG): container finished" podID="6ef66484-04d6-4994-be99-16da9c301ea2" containerID="0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978" exitCode=0 Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.501249 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8tbx" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.501382 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerDied","Data":"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978"} Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.501459 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8tbx" event={"ID":"6ef66484-04d6-4994-be99-16da9c301ea2","Type":"ContainerDied","Data":"034f96304e8984e4d763e19cfd5bfae285be9670bee6f5b6ac7d92c328f01712"} Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.522031 4703 scope.go:117] "RemoveContainer" containerID="18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.528673 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jqdn"] Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.531094 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6jqdn"] Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.540038 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxgf\" (UniqueName: \"kubernetes.io/projected/6ef66484-04d6-4994-be99-16da9c301ea2-kube-api-access-2qxgf\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.540073 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.540083 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ef66484-04d6-4994-be99-16da9c301ea2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.540740 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"] Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.544288 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f8tbx"] Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.561109 4703 scope.go:117] "RemoveContainer" containerID="02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.577641 4703 scope.go:117] "RemoveContainer" containerID="153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d" Dec 09 12:08:49 crc kubenswrapper[4703]: E1209 12:08:49.578138 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d\": container with ID starting with 153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d not found: ID does not exist" containerID="153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.578208 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d"} err="failed to get container status 
\"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d\": rpc error: code = NotFound desc = could not find container \"153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d\": container with ID starting with 153aadf41c179d77acb21f91770176d7e5abcc97d1b84e336778c10002346d9d not found: ID does not exist" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.578259 4703 scope.go:117] "RemoveContainer" containerID="18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0" Dec 09 12:08:49 crc kubenswrapper[4703]: E1209 12:08:49.578716 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0\": container with ID starting with 18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0 not found: ID does not exist" containerID="18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.578750 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0"} err="failed to get container status \"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0\": rpc error: code = NotFound desc = could not find container \"18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0\": container with ID starting with 18435a9f38224c12318333cbb1ec0e5bcc510306e8a9d3eeb5764308fe6f2fe0 not found: ID does not exist" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.578775 4703 scope.go:117] "RemoveContainer" containerID="02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679" Dec 09 12:08:49 crc kubenswrapper[4703]: E1209 12:08:49.579596 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679\": container with ID starting with 02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679 not found: ID does not exist" containerID="02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.579620 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679"} err="failed to get container status \"02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679\": rpc error: code = NotFound desc = could not find container \"02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679\": container with ID starting with 02b42c9ddea6ff3651c310756774be9c089a3b0d47b72f763ad60ea647116679 not found: ID does not exist" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.579633 4703 scope.go:117] "RemoveContainer" containerID="0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.596356 4703 scope.go:117] "RemoveContainer" containerID="00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.620637 4703 scope.go:117] "RemoveContainer" containerID="0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.636891 4703 scope.go:117] "RemoveContainer" containerID="0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978" Dec 09 12:08:49 crc 
kubenswrapper[4703]: E1209 12:08:49.637481 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978\": container with ID starting with 0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978 not found: ID does not exist" containerID="0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.637520 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978"} err="failed to get container status \"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978\": rpc error: code = NotFound desc = could not find container \"0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978\": container with ID starting with 0f736eb5589303669b38b6a7d489f259284004aff2f8d77d4a8c8c77ca998978 not found: ID does not exist" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.637550 4703 scope.go:117] "RemoveContainer" containerID="00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93" Dec 09 12:08:49 crc kubenswrapper[4703]: E1209 12:08:49.638022 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93\": container with ID starting with 00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93 not found: ID does not exist" containerID="00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.638067 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93"} err="failed to get container status \"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93\": rpc error: code = NotFound desc = could not find container \"00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93\": container with ID starting with 00e7bbf03bc9973f048de61e39a88c27520a3bfdc4651728672fa0d86a994e93 not found: ID does not exist" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.638083 4703 scope.go:117] "RemoveContainer" containerID="0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef" Dec 09 12:08:49 crc kubenswrapper[4703]: E1209 12:08:49.638365 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef\": container with ID starting with 0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef not found: ID does not exist" containerID="0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef" Dec 09 12:08:49 crc kubenswrapper[4703]: I1209 12:08:49.638415 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef"} err="failed to get container status \"0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef\": rpc error: code = NotFound desc = could not find container \"0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef\": container with ID starting with 0ec318d759a307a288b15ea3624342244932a6e1654d992019372b2a1b4886ef not found: ID does not exist" Dec 09 12:08:51 crc kubenswrapper[4703]: 
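[Editor's note] The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race, not a real failure: by the time the deletor asks CRI-O for the container's status, the container has already been removed, so the runtime answers with gRPC code NotFound and the kubelet logs it and moves on; the desired end state (container gone) already holds. A generic Go sketch of that idempotent-delete pattern (not the kubelet's code; the deleteContainer callback stands in for a CRI RemoveContainer call):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIdempotent treats NotFound as success: if the container is
    // already gone, the delete has nothing left to do.
    func removeIdempotent(deleteContainer func(id string) error, id string) error {
        err := deleteContainer(id)
        if status.Code(err) == codes.NotFound {
            return nil // already absent: desired state holds
        }
        return err
    }

    func main() {
        // Simulate a runtime that no longer knows the container.
        gone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        fmt.Println(removeIdempotent(gone, "153aadf41c17")) // prints <nil>
    }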
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.077549 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" path="/var/lib/kubelet/pods/6ef66484-04d6-4994-be99-16da9c301ea2/volumes"
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.078441 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" path="/var/lib/kubelet/pods/be89872a-5325-41c1-85b8-6c9880d40f68/volumes"
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.212294 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"]
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.212521 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jgffv" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="registry-server" containerID="cri-o://473c62f891dfff88e7555985b41a5d413ff1f32439938d60c3b99c3cc6edd100" gracePeriod=2
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.810157 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"]
Dec 09 12:08:51 crc kubenswrapper[4703]: I1209 12:08:51.810425 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pl7l8" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="registry-server" containerID="cri-o://2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2" gracePeriod=2
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.077308 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pl7l8"
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.182656 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities\") pod \"e8cddd58-cfa9-4371-bbfb-93747eceefca\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") "
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.182804 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content\") pod \"e8cddd58-cfa9-4371-bbfb-93747eceefca\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") "
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.182867 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc84x\" (UniqueName: \"kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x\") pod \"e8cddd58-cfa9-4371-bbfb-93747eceefca\" (UID: \"e8cddd58-cfa9-4371-bbfb-93747eceefca\") "
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.184560 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities" (OuterVolumeSpecName: "utilities") pod "e8cddd58-cfa9-4371-bbfb-93747eceefca" (UID: "e8cddd58-cfa9-4371-bbfb-93747eceefca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.189438 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x" (OuterVolumeSpecName: "kube-api-access-vc84x") pod "e8cddd58-cfa9-4371-bbfb-93747eceefca" (UID: "e8cddd58-cfa9-4371-bbfb-93747eceefca"). InnerVolumeSpecName "kube-api-access-vc84x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.284110 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc84x\" (UniqueName: \"kubernetes.io/projected/e8cddd58-cfa9-4371-bbfb-93747eceefca-kube-api-access-vc84x\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.284146 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.307318 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8cddd58-cfa9-4371-bbfb-93747eceefca" (UID: "e8cddd58-cfa9-4371-bbfb-93747eceefca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.385145 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8cddd58-cfa9-4371-bbfb-93747eceefca-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.524322 4703 generic.go:334] "Generic (PLEG): container finished" podID="d5cb3256-7c02-446e-8c63-d1559675704e" containerID="473c62f891dfff88e7555985b41a5d413ff1f32439938d60c3b99c3cc6edd100" exitCode=0
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.524410 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerDied","Data":"473c62f891dfff88e7555985b41a5d413ff1f32439938d60c3b99c3cc6edd100"}
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.526542 4703 generic.go:334] "Generic (PLEG): container finished" podID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerID="2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2" exitCode=0
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.526580 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerDied","Data":"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2"}
Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.526789 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pl7l8" event={"ID":"e8cddd58-cfa9-4371-bbfb-93747eceefca","Type":"ContainerDied","Data":"47a8bb095033af191ac0ec13aba863ec8246c576e23336da1eac88ccdc8768ef"}
Need to start a new one" pod="openshift-marketplace/redhat-operators-pl7l8" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.526857 4703 scope.go:117] "RemoveContainer" containerID="2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.557014 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"] Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.558216 4703 scope.go:117] "RemoveContainer" containerID="a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.566635 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pl7l8"] Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.581712 4703 scope.go:117] "RemoveContainer" containerID="bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.597369 4703 scope.go:117] "RemoveContainer" containerID="2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2" Dec 09 12:08:53 crc kubenswrapper[4703]: E1209 12:08:53.600589 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2\": container with ID starting with 2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2 not found: ID does not exist" containerID="2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.600634 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2"} err="failed to get container status \"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2\": rpc error: code = NotFound desc = could not find container \"2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2\": container with ID starting with 2230ba1ff845c007cf415e59e228fe718cba569e618038d4b07104c7d6bd4ed2 not found: ID does not exist" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.600663 4703 scope.go:117] "RemoveContainer" containerID="a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943" Dec 09 12:08:53 crc kubenswrapper[4703]: E1209 12:08:53.603566 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943\": container with ID starting with a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943 not found: ID does not exist" containerID="a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.603612 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943"} err="failed to get container status \"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943\": rpc error: code = NotFound desc = could not find container \"a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943\": container with ID starting with a76d04af646fde4455dead2c33c5a6fed48fb82f196cb7b0c77186bf5a253943 not found: ID does not exist" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.603644 4703 scope.go:117] "RemoveContainer" 
containerID="bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef" Dec 09 12:08:53 crc kubenswrapper[4703]: E1209 12:08:53.604006 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef\": container with ID starting with bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef not found: ID does not exist" containerID="bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.604159 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef"} err="failed to get container status \"bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef\": rpc error: code = NotFound desc = could not find container \"bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef\": container with ID starting with bac0f131c09e3a62726e0b2ecc77fcef9cfff0dc733216b78d1c4db29ce40bef not found: ID does not exist" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.836493 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.990901 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfst\" (UniqueName: \"kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst\") pod \"d5cb3256-7c02-446e-8c63-d1559675704e\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.990945 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities\") pod \"d5cb3256-7c02-446e-8c63-d1559675704e\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.991047 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content\") pod \"d5cb3256-7c02-446e-8c63-d1559675704e\" (UID: \"d5cb3256-7c02-446e-8c63-d1559675704e\") " Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.991720 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities" (OuterVolumeSpecName: "utilities") pod "d5cb3256-7c02-446e-8c63-d1559675704e" (UID: "d5cb3256-7c02-446e-8c63-d1559675704e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:08:53 crc kubenswrapper[4703]: I1209 12:08:53.993489 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst" (OuterVolumeSpecName: "kube-api-access-6bfst") pod "d5cb3256-7c02-446e-8c63-d1559675704e" (UID: "d5cb3256-7c02-446e-8c63-d1559675704e"). InnerVolumeSpecName "kube-api-access-6bfst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.008320 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5cb3256-7c02-446e-8c63-d1559675704e" (UID: "d5cb3256-7c02-446e-8c63-d1559675704e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.092563 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bfst\" (UniqueName: \"kubernetes.io/projected/d5cb3256-7c02-446e-8c63-d1559675704e-kube-api-access-6bfst\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.092645 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.092677 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5cb3256-7c02-446e-8c63-d1559675704e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.536480 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jgffv" event={"ID":"d5cb3256-7c02-446e-8c63-d1559675704e","Type":"ContainerDied","Data":"c78e35a4d92dc3cbc0ab96cf3eaa37329a0af4701c90b4038fcc6a50c9473826"} Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.536558 4703 scope.go:117] "RemoveContainer" containerID="473c62f891dfff88e7555985b41a5d413ff1f32439938d60c3b99c3cc6edd100" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.536504 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jgffv" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.553699 4703 scope.go:117] "RemoveContainer" containerID="5c68a02d58bb14fb5c8b0d3e43321482d23578d912af7b4f9ca94f7bacebfbb0" Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.565665 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"] Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.568395 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jgffv"] Dec 09 12:08:54 crc kubenswrapper[4703]: I1209 12:08:54.588524 4703 scope.go:117] "RemoveContainer" containerID="89c7fdf1d249a00dfdf3ceb78432ed76e627b8bd24f97c44e641d49011c76b0e" Dec 09 12:08:55 crc kubenswrapper[4703]: I1209 12:08:55.076854 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" path="/var/lib/kubelet/pods/d5cb3256-7c02-446e-8c63-d1559675704e/volumes" Dec 09 12:08:55 crc kubenswrapper[4703]: I1209 12:08:55.077484 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" path="/var/lib/kubelet/pods/e8cddd58-cfa9-4371-bbfb-93747eceefca/volumes" Dec 09 12:08:56 crc kubenswrapper[4703]: I1209 12:08:56.374766 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6hpw7"] Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.058799 4703 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.059560 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431" gracePeriod=15 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.059652 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d" gracePeriod=15 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.059713 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491" gracePeriod=15 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.059707 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9" gracePeriod=15 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.060012 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4" gracePeriod=15 Dec 09 12:08:59 crc 
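[Editor's note] "Killing container with a grace period ... gracePeriod=15" (and gracePeriod=2 for the catalog pods earlier) describes the usual graceful-termination contract: the runtime signals the container's processes with SIGTERM and escalates to SIGKILL only if they are still running when the grace period lapses. A generic Go sketch of that pattern for a single child process (illustration only; for containers the actual signalling is done by the runtime, CRI-O here, not by code like this):

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace mirrors the SIGTERM-then-SIGKILL pattern implied by
    // gracePeriod in the log: ask politely, wait up to the grace period,
    // then force-kill. Unix-only (uses SIGTERM).
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
        _ = cmd.Process.Signal(syscall.SIGTERM)

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period elapsed: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        _ = killWithGrace(cmd, 2*time.Second) // cf. gracePeriod=2 for the catalog pods
    }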
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064143 4703 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064602 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064629 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064648 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064660 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064683 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064694 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064702 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064711 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064722 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064729 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064746 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064755 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064769 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064777 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064785 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064795 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064807 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064816 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064826 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064834 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064850 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" containerName="pruner"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064859 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" containerName="pruner"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064872 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064881 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064892 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064901 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="extract-utilities"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064915 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064923 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064933 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064941 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064954 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064963 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064971 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.064978 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.064994 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065003 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.065016 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065026 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="extract-content"
Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.065041 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065052 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065210 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065226 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cddd58-cfa9-4371-bbfb-93747eceefca" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065246 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cb3256-7c02-446e-8c63-d1559675704e" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065261 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cbd2ca-0c83-4c61-b265-a2abcb8ea07f" containerName="pruner"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065273 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065288 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065305 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065315 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef66484-04d6-4994-be99-16da9c301ea2" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065329 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065343 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="be89872a-5325-41c1-85b8-6c9880d40f68" containerName="registry-server"
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.065355 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.067849 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.076498 4703 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080159 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080251 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080274 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080295 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080323 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080560 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080623 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.080666 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.124726 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189198 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189383 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189425 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189458 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189513 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189537 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189568 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189663 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189713 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189739 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189765 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189787 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189806 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189830 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.189848 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.414314 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:08:59 crc kubenswrapper[4703]: E1209 12:08:59.441229 4703 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f8ac0c1b29461 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:08:59.440583777 +0000 UTC m=+238.689347286,LastTimestamp:2025-12-09 12:08:59.440583777 +0000 UTC m=+238.689347286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.572479 4703 generic.go:334] "Generic (PLEG): container finished" podID="47f5e685-7028-4185-9345-fbb2aa35ca07" containerID="2ac3d0eb46d06cf0e5e865d7b300e7d08b6ccd81d6a03baaa7e2e66dc80b1591" exitCode=0 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.572559 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"47f5e685-7028-4185-9345-fbb2aa35ca07","Type":"ContainerDied","Data":"2ac3d0eb46d06cf0e5e865d7b300e7d08b6ccd81d6a03baaa7e2e66dc80b1591"} Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.573246 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.573464 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.574966 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.577027 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.577849 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4" exitCode=0 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.577960 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9" exitCode=0 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.578050 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d" exitCode=0 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.577937 4703 scope.go:117] "RemoveContainer" containerID="da708e8829aba2597198cdab64374791885bac9ddf7cb04b296c81d4e81d42ff" Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.578121 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491" exitCode=2 Dec 09 12:08:59 crc kubenswrapper[4703]: I1209 12:08:59.579395 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c8f012182f05573ceca12301c55359b469da6b75b53baebfcbf35952efbeb692"} Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.587279 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.590040 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515"} Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.590777 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.591076 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.782850 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.783416 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.783876 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.812853 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock\") pod \"47f5e685-7028-4185-9345-fbb2aa35ca07\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.812937 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir\") pod \"47f5e685-7028-4185-9345-fbb2aa35ca07\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.813001 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access\") pod \"47f5e685-7028-4185-9345-fbb2aa35ca07\" (UID: \"47f5e685-7028-4185-9345-fbb2aa35ca07\") " Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.813018 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock" (OuterVolumeSpecName: "var-lock") pod "47f5e685-7028-4185-9345-fbb2aa35ca07" (UID: "47f5e685-7028-4185-9345-fbb2aa35ca07"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.813104 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "47f5e685-7028-4185-9345-fbb2aa35ca07" (UID: "47f5e685-7028-4185-9345-fbb2aa35ca07"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.813267 4703 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.813282 4703 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47f5e685-7028-4185-9345-fbb2aa35ca07-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.818343 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "47f5e685-7028-4185-9345-fbb2aa35ca07" (UID: "47f5e685-7028-4185-9345-fbb2aa35ca07"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:09:00 crc kubenswrapper[4703]: I1209 12:09:00.914468 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/47f5e685-7028-4185-9345-fbb2aa35ca07-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.072287 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.072532 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.523376 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.524487 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.525104 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.525434 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.525874 4703 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.597727 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.598377 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431" exitCode=0 Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.598458 4703 scope.go:117] "RemoveContainer" containerID="dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.598471 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.600421 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"47f5e685-7028-4185-9345-fbb2aa35ca07","Type":"ContainerDied","Data":"af932872b81e79802a56c37bf20fd5a066a464efa34b7017dff74982758d3f65"} Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.600469 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af932872b81e79802a56c37bf20fd5a066a464efa34b7017dff74982758d3f65" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.600629 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.604812 4703 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.605272 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.605425 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.614556 4703 scope.go:117] "RemoveContainer" containerID="917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625431 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625483 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625581 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625596 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625634 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.625665 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.626065 4703 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.626085 4703 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.626093 4703 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.626432 4703 scope.go:117] "RemoveContainer" containerID="8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.638126 4703 scope.go:117] "RemoveContainer" containerID="4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.649937 4703 scope.go:117] "RemoveContainer" containerID="cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.663916 4703 scope.go:117] "RemoveContainer" containerID="55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.680153 4703 scope.go:117] "RemoveContainer" containerID="dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4" Dec 09 12:09:01 crc kubenswrapper[4703]: E1209 12:09:01.680634 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\": container with ID starting with dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4 not found: ID does not exist" containerID="dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.680715 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4"} err="failed to get container status \"dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\": rpc error: code = NotFound desc = could not find container \"dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4\": container with ID starting with dd716e645be3932ca219e76ba8d896a8cf27086bbbac8f5e68057873fa68a6f4 not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.680775 4703 scope.go:117] "RemoveContainer" containerID="917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9" Dec 09 12:09:01 crc kubenswrapper[4703]: E1209 12:09:01.681206 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\": container with ID starting with 917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9 not found: ID does not exist" containerID="917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.681260 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9"} err="failed to get container status \"917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\": rpc error: code = NotFound desc = could not find container \"917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9\": container with ID starting with 917390e847b6ec24f370316f0fbc305db0f667eab946b4f0e749855a62112cc9 not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.681302 4703 scope.go:117] "RemoveContainer" containerID="8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d" Dec 09 12:09:01 crc kubenswrapper[4703]: E1209 12:09:01.681684 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\": container with ID starting with 8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d not found: ID does not exist" containerID="8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.681713 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d"} err="failed to get container status \"8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\": rpc error: code = NotFound desc = could not find container \"8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d\": container with ID starting with 8612defa917cc99574e7762a0260e1b670277ea754e0a66bca960ef6b5a97f2d not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.681740 4703 scope.go:117] "RemoveContainer" containerID="4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491" Dec 09 12:09:01 crc kubenswrapper[4703]: E1209 12:09:01.681978 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\": container with ID starting with 4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491 not found: ID does not exist" containerID="4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.682012 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491"} err="failed to get container status \"4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\": rpc error: code = NotFound desc = could not find container \"4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491\": container with ID starting with 4dc0c9f4fdf8ce1ac42d54d3823b0137e0fd8ddf4499236ac3b954f9bff7c491 not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.682030 4703 scope.go:117] "RemoveContainer" containerID="cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431" Dec 09 12:09:01 crc 
kubenswrapper[4703]: E1209 12:09:01.682288 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\": container with ID starting with cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431 not found: ID does not exist" containerID="cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.682313 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431"} err="failed to get container status \"cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\": rpc error: code = NotFound desc = could not find container \"cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431\": container with ID starting with cd9149c63c18f6178219d1f2e4f8e44bc861746b90fb9748105461192a870431 not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.682328 4703 scope.go:117] "RemoveContainer" containerID="55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d" Dec 09 12:09:01 crc kubenswrapper[4703]: E1209 12:09:01.682529 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\": container with ID starting with 55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d not found: ID does not exist" containerID="55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.682557 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d"} err="failed to get container status \"55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\": rpc error: code = NotFound desc = could not find container \"55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d\": container with ID starting with 55c7e1e49ab4af3fff08ccfa9a7d9b4f889fbdd1d60592bca6c87990f4623d5d not found: ID does not exist" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.915649 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.915926 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:01 crc kubenswrapper[4703]: I1209 12:09:01.916145 4703 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:03 crc kubenswrapper[4703]: I1209 12:09:03.081638 4703 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.925008 4703 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.925824 4703 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.926227 4703 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.926540 4703 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.926838 4703 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:04 crc kubenswrapper[4703]: I1209 12:09:04.926874 4703 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 12:09:04 crc kubenswrapper[4703]: E1209 12:09:04.927131 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="200ms" Dec 09 12:09:05 crc kubenswrapper[4703]: E1209 12:09:05.128158 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="400ms" Dec 09 12:09:05 crc kubenswrapper[4703]: E1209 12:09:05.529042 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="800ms" Dec 09 12:09:06 crc kubenswrapper[4703]: E1209 12:09:06.329656 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="1.6s" Dec 09 12:09:07 crc kubenswrapper[4703]: E1209 12:09:07.930933 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="3.2s" Dec 09 12:09:09 crc kubenswrapper[4703]: E1209 12:09:09.340119 4703 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.201:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f8ac0c1b29461 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:08:59.440583777 +0000 UTC m=+238.689347286,LastTimestamp:2025-12-09 12:08:59.440583777 +0000 UTC m=+238.689347286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:09:11 crc kubenswrapper[4703]: I1209 12:09:11.072833 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:11 crc kubenswrapper[4703]: I1209 12:09:11.073368 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:11 crc kubenswrapper[4703]: E1209 12:09:11.132019 4703 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.201:6443: connect: connection refused" interval="6.4s" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.069475 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.070509 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.070916 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.081710 4703 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.081742 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:12 crc kubenswrapper[4703]: E1209 12:09:12.082196 4703 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.082864 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.662900 4703 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="956e544e8a993cab2f36cc49dd95c5e2cb16b5cb6254146c89874a4cc06340e8" exitCode=0 Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.663007 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"956e544e8a993cab2f36cc49dd95c5e2cb16b5cb6254146c89874a4cc06340e8"} Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.663263 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7fdb128de17688db49e2456963c40d2286707f83a8c01b319454ef25492ef5ae"} Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.663568 4703 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.663591 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:12 crc kubenswrapper[4703]: E1209 12:09:12.663989 4703 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.664095 4703 
status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.664453 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.667282 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.667325 4703 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b" exitCode=1 Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.667351 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b"} Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.667679 4703 scope.go:117] "RemoveContainer" containerID="6b4933db7a36f445ad8289586a5de8f593c8f565d53b4b310292c2a6b436b86b" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.667906 4703 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.668155 4703 status_manager.go:851] "Failed to get status for pod" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.668420 4703 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.201:6443: connect: connection refused" Dec 09 12:09:12 crc kubenswrapper[4703]: I1209 12:09:12.901621 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.674212 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.674529 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9fe1d46f27ee0e64fffccc8bc3e5da1af2857fb2c4cd1dbb44ce244204cbfee"} Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.677473 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d884d323811004c92d6ae6b1bbfa137999514f8ae5b9cd2beba3ddb619f4b94"} Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.677498 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3504dd9e6c3e244ac49d91634e09afddb77c3fa42113a2b11ed4d46e2af33a5c"} Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.677507 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1ec6f6f27c662e8a1b1c62f7ef78cb3a1c1f828806ecf5d8fa074630b1b585b2"} Dec 09 12:09:13 crc kubenswrapper[4703]: I1209 12:09:13.677516 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd6d30d1d8bb663b84e20d41d11f5e78ddd741f19faf399a7825c7c731b1d500"} Dec 09 12:09:14 crc kubenswrapper[4703]: I1209 12:09:14.685818 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36c49dd3395cdd6b6886d493dbb820b921501c2c753f2f9e23b5285e9b89485f"} Dec 09 12:09:14 crc kubenswrapper[4703]: I1209 12:09:14.686244 4703 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:14 crc kubenswrapper[4703]: I1209 12:09:14.686260 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:17 crc kubenswrapper[4703]: I1209 12:09:17.083069 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:17 crc kubenswrapper[4703]: I1209 12:09:17.083414 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:17 crc kubenswrapper[4703]: I1209 12:09:17.089206 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:19 crc kubenswrapper[4703]: I1209 12:09:19.459244 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:09:19 crc kubenswrapper[4703]: I1209 12:09:19.745032 4703 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:20 crc kubenswrapper[4703]: I1209 12:09:20.725849 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:20 crc kubenswrapper[4703]: I1209 12:09:20.726723 4703 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:20 crc 
kubenswrapper[4703]: I1209 12:09:20.726831 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:20 crc kubenswrapper[4703]: I1209 12:09:20.729732 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.118150 4703 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="61d0e35d-8e89-45e6-8b15-bba997beab19" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.402264 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerName="oauth-openshift" containerID="cri-o://2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd" gracePeriod=15 Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.726634 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.731975 4703 generic.go:334] "Generic (PLEG): container finished" podID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerID="2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd" exitCode=0 Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732034 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" event={"ID":"95fb5258-8faf-4e0a-ba69-319222cca40a","Type":"ContainerDied","Data":"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd"} Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732127 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" event={"ID":"95fb5258-8faf-4e0a-ba69-319222cca40a","Type":"ContainerDied","Data":"1474dbedbad30a504a63675f2b650563e1ee4f58cbeed97c5592d1dbad800a2f"} Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732009 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6hpw7" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732215 4703 scope.go:117] "RemoveContainer" containerID="2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732588 4703 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.732607 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="687e9c7e-e4b8-4bbf-871e-714260501e27" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.766687 4703 scope.go:117] "RemoveContainer" containerID="2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.767001 4703 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="61d0e35d-8e89-45e6-8b15-bba997beab19" Dec 09 12:09:21 crc kubenswrapper[4703]: E1209 12:09:21.767604 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd\": container with ID starting with 2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd not found: ID does not exist" containerID="2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.767670 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd"} err="failed to get container status \"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd\": rpc error: code = NotFound desc = could not find container \"2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd\": container with ID starting with 2739fe7d949a7f3666fc2e44379bc0d675c8f4cec599a3f2885310c0bb2d2bfd not found: ID does not exist" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877102 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877158 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877181 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877241 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877282 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877312 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877346 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877365 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877374 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwfbs\" (UniqueName: \"kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877434 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877474 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877503 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877525 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877566 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.877681 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies\") pod \"95fb5258-8faf-4e0a-ba69-319222cca40a\" (UID: \"95fb5258-8faf-4e0a-ba69-319222cca40a\") " Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.878535 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.878676 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879047 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879179 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879307 4703 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879327 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879341 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879354 4703 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.879367 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.887792 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs" (OuterVolumeSpecName: "kube-api-access-dwfbs") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "kube-api-access-dwfbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.887956 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.888248 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.894479 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.894722 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.895054 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.895100 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.895349 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.895407 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "95fb5258-8faf-4e0a-ba69-319222cca40a" (UID: "95fb5258-8faf-4e0a-ba69-319222cca40a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980798 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980830 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980841 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980852 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980864 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwfbs\" (UniqueName: \"kubernetes.io/projected/95fb5258-8faf-4e0a-ba69-319222cca40a-kube-api-access-dwfbs\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980873 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980885 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980898 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:21 crc kubenswrapper[4703]: I1209 12:09:21.980911 4703 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95fb5258-8faf-4e0a-ba69-319222cca40a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:22 crc kubenswrapper[4703]: I1209 12:09:22.901912 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:09:22 crc kubenswrapper[4703]: I1209 12:09:22.908033 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:09:23 crc kubenswrapper[4703]: I1209 12:09:23.748866 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:09:29 crc kubenswrapper[4703]: I1209 12:09:29.912306 4703 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.077904 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.085718 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.572032 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.871962 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.910757 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 12:09:30 crc kubenswrapper[4703]: I1209 12:09:30.936619 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.037405 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.085219 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.206731 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.344700 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.409269 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.516140 4703 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.526022 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.869449 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 12:09:31 crc kubenswrapper[4703]: I1209 12:09:31.998518 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.046575 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.048151 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.253613 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 12:09:32 crc kubenswrapper[4703]: 
Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.484036 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.486560 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.519820 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.639471 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.680646 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.692521 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.742284 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.751338 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.804671 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.890990 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.969064 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 12:09:32 crc kubenswrapper[4703]: I1209 12:09:32.981530 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.129067 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.215036 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.221569 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.564116 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.609049 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.713130 4703 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.717465 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.717445177 podStartE2EDuration="34.717445177s" podCreationTimestamp="
12:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:19.812571512 +0000 UTC m=+259.061335031" watchObservedRunningTime="2025-12-09 12:09:33.717445177 +0000 UTC m=+272.966208706" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.718342 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-6hpw7"] Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.718398 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.720793 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.723872 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.739164 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.739146692 podStartE2EDuration="14.739146692s" podCreationTimestamp="2025-12-09 12:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:33.735677067 +0000 UTC m=+272.984440586" watchObservedRunningTime="2025-12-09 12:09:33.739146692 +0000 UTC m=+272.987910211" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.788323 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.886828 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.956548 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.971934 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 12:09:33 crc kubenswrapper[4703]: I1209 12:09:33.974969 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.133025 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.203730 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.222850 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.223867 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.357648 4703 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.685635 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.813705 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.855121 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.882569 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.950037 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 12:09:34 crc kubenswrapper[4703]: I1209 12:09:34.961063 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.049485 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.076656 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" path="/var/lib/kubelet/pods/95fb5258-8faf-4e0a-ba69-319222cca40a/volumes" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.083671 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.121743 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.190368 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.213683 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.263911 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.287876 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.338760 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.374008 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.411379 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.463973 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 
09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.472649 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.511933 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.574695 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.606016 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.624605 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.718992 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.731319 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.762893 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.775256 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.872691 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.877714 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.907671 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 12:09:35 crc kubenswrapper[4703]: I1209 12:09:35.948079 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.043320 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.065499 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.093119 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.127806 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.133400 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.223808 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 
12:09:36.418105 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.456320 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.459660 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.491177 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.583275 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.592270 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.599424 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.692966 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.802147 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.819845 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.841402 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.885694 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 12:09:36 crc kubenswrapper[4703]: I1209 12:09:36.903120 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.038352 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.088286 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.169668 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.191546 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.192172 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.193946 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 
12:09:37.248170 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.332085 4703 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.483267 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.799503 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.809650 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.839461 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.907485 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.921655 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 12:09:37 crc kubenswrapper[4703]: I1209 12:09:37.933428 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.041048 4703 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.087529 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.195358 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.199181 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.201803 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.423504 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.509810 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.556122 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.613670 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.627386 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 
12:09:38.666835 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.748950 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 12:09:38 crc kubenswrapper[4703]: I1209 12:09:38.776527 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.036342 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.059405 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.277218 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.357125 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.362942 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.368180 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.382914 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.464929 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.488205 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.489837 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.504008 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.505316 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"] Dec 09 12:09:39 crc kubenswrapper[4703]: E1209 12:09:39.505670 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" containerName="installer" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.505688 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" containerName="installer" Dec 09 12:09:39 crc kubenswrapper[4703]: E1209 12:09:39.505703 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerName="oauth-openshift" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.505711 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerName="oauth-openshift" Dec 09 12:09:39 crc kubenswrapper[4703]: 
I1209 12:09:39.505849 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f5e685-7028-4185-9345-fbb2aa35ca07" containerName="installer" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.505866 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fb5258-8faf-4e0a-ba69-319222cca40a" containerName="oauth-openshift" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.506672 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.510554 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.510812 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.510915 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.511407 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.511672 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.512814 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.512945 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.520055 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.520483 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.520723 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.521419 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.521642 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.522458 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.527128 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.527453 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.544012 4703 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.545454 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.545657 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.550337 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"] Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.553451 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.556896 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.679255 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.683208 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.694635 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696274 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696314 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696338 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696362 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696397 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696419 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696442 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-policies\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696471 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696504 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696525 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-dir\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696551 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696581 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696649 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.696677 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bm4\" (UniqueName: \"kubernetes.io/projected/1ecfb3de-ba87-49c4-88a8-23d0ea397032-kube-api-access-b5bm4\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.784716 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798346 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798393 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798410 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798425 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798457 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798474 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798492 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-policies\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798511 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798536 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798553 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-dir\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798575 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798597 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798629 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.798645 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5bm4\" (UniqueName: \"kubernetes.io/projected/1ecfb3de-ba87-49c4-88a8-23d0ea397032-kube-api-access-b5bm4\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.799753 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.800498 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-dir\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.800857 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-audit-policies\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.801022 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.801442 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.804672 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.805180 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.805408 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.805462 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.806998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.811588 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.813071 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.813931 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1ecfb3de-ba87-49c4-88a8-23d0ea397032-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.819023 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5bm4\" (UniqueName: \"kubernetes.io/projected/1ecfb3de-ba87-49c4-88a8-23d0ea397032-kube-api-access-b5bm4\") pod \"oauth-openshift-69fb88d4f9-lt8f4\" (UID: \"1ecfb3de-ba87-49c4-88a8-23d0ea397032\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.853176 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.857160 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:39 crc kubenswrapper[4703]: I1209 12:09:39.883249 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.343627 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.381161 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.422137 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.460486 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.476143 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.684983 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.854074 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.855363 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.856506 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.872415 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 09 12:09:40 crc kubenswrapper[4703]: I1209 12:09:40.990879 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.007010 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.072626 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.115967 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.159864 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.174621 4703 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.174873 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515" gracePeriod=5
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.180714 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.203011 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.207479 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.300801 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.306579 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.406353 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.416704 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.555082 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.645066 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.710129 4703 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.745058 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.756173 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.775426 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.780134 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.901146 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.906257 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 09 12:09:41 crc kubenswrapper[4703]: I1209 12:09:41.941958 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.021466 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.021530 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.025258 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.108752 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.194783 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.212102 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.229040 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.308656 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.364761 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.385252 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.569352 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.617857 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.660374 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.686925 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 09 12:09:42 crc kubenswrapper[4703]: E1209 12:09:42.756943 4703 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 09 12:09:42 crc kubenswrapper[4703]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication_1ecfb3de-ba87-49c4-88a8-23d0ea397032_0(750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f): error adding pod openshift-authentication_oauth-openshift-69fb88d4f9-lt8f4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f" Netns:"/var/run/netns/ff868ac5-e508-475a-94a0-9324201058ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69fb88d4f9-lt8f4;K8S_POD_INFRA_CONTAINER_ID=750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f;K8S_POD_UID=1ecfb3de-ba87-49c4-88a8-23d0ea397032" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4] networking: Multus: [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4/1ecfb3de-ba87-49c4-88a8-23d0ea397032]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-69fb88d4f9-lt8f4 in out of cluster comm: pod "oauth-openshift-69fb88d4f9-lt8f4" not found
Dec 09 12:09:42 crc kubenswrapper[4703]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 09 12:09:42 crc kubenswrapper[4703]: >
Dec 09 12:09:42 crc kubenswrapper[4703]: E1209 12:09:42.757081 4703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 09 12:09:42 crc kubenswrapper[4703]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication_1ecfb3de-ba87-49c4-88a8-23d0ea397032_0(750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f): error adding pod openshift-authentication_oauth-openshift-69fb88d4f9-lt8f4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f" Netns:"/var/run/netns/ff868ac5-e508-475a-94a0-9324201058ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69fb88d4f9-lt8f4;K8S_POD_INFRA_CONTAINER_ID=750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f;K8S_POD_UID=1ecfb3de-ba87-49c4-88a8-23d0ea397032" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4] networking: Multus: [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4/1ecfb3de-ba87-49c4-88a8-23d0ea397032]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-69fb88d4f9-lt8f4 in out of cluster comm: pod "oauth-openshift-69fb88d4f9-lt8f4" not found
Dec 09 12:09:42 crc kubenswrapper[4703]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 09 12:09:42 crc kubenswrapper[4703]: > pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:42 crc kubenswrapper[4703]: E1209 12:09:42.757122 4703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 09 12:09:42 crc kubenswrapper[4703]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication_1ecfb3de-ba87-49c4-88a8-23d0ea397032_0(750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f): error adding pod openshift-authentication_oauth-openshift-69fb88d4f9-lt8f4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f" Netns:"/var/run/netns/ff868ac5-e508-475a-94a0-9324201058ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69fb88d4f9-lt8f4;K8S_POD_INFRA_CONTAINER_ID=750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f;K8S_POD_UID=1ecfb3de-ba87-49c4-88a8-23d0ea397032" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4] networking: Multus: [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4/1ecfb3de-ba87-49c4-88a8-23d0ea397032]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-69fb88d4f9-lt8f4 in out of cluster comm: pod "oauth-openshift-69fb88d4f9-lt8f4" not found
Dec 09 12:09:42 crc kubenswrapper[4703]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 09 12:09:42 crc kubenswrapper[4703]: > pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:42 crc kubenswrapper[4703]: E1209 12:09:42.757270 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication(1ecfb3de-ba87-49c4-88a8-23d0ea397032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication(1ecfb3de-ba87-49c4-88a8-23d0ea397032)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-69fb88d4f9-lt8f4_openshift-authentication_1ecfb3de-ba87-49c4-88a8-23d0ea397032_0(750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f): error adding pod openshift-authentication_oauth-openshift-69fb88d4f9-lt8f4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f\\\" Netns:\\\"/var/run/netns/ff868ac5-e508-475a-94a0-9324201058ae\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-69fb88d4f9-lt8f4;K8S_POD_INFRA_CONTAINER_ID=750859a4966dfe0c0012a18ca5ffd116d0b53ceeb8597ffb2d3fbf22f2ffd70f;K8S_POD_UID=1ecfb3de-ba87-49c4-88a8-23d0ea397032\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4] networking: Multus: [openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4/1ecfb3de-ba87-49c4-88a8-23d0ea397032]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-69fb88d4f9-lt8f4 in out of cluster comm: pod \\\"oauth-openshift-69fb88d4f9-lt8f4\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" podUID="1ecfb3de-ba87-49c4-88a8-23d0ea397032"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.812957 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.834558 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.842088 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.842785 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:42 crc kubenswrapper[4703]: I1209 12:09:42.918371 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.079251 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.083850 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.127551 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.137690 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.236147 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.247635 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.330717 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"]
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.332036 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.351809 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.434840 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.473212 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.494374 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.503602 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.534359 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.617955 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.663767 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.751339 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.799434 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.812490 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.849940 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" event={"ID":"1ecfb3de-ba87-49c4-88a8-23d0ea397032","Type":"ContainerStarted","Data":"79fa038ee561dc9292b8c2e3b9b949701fb60a8fb9102f028918790ea2192b6e"}
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.850003 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" event={"ID":"1ecfb3de-ba87-49c4-88a8-23d0ea397032","Type":"ContainerStarted","Data":"a83632193d505baa30319fd5d25d1eca9f2c95697d7a06b8c53777357bd6be70"}
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.850340 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.850767 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.873366 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 09 12:09:43 crc kubenswrapper[4703]: I1209 12:09:43.874752 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" podStartSLOduration=47.87473036 podStartE2EDuration="47.87473036s" podCreationTimestamp="2025-12-09 12:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:43.870267827 +0000 UTC m=+283.119031346" watchObservedRunningTime="2025-12-09 12:09:43.87473036 +0000 UTC m=+283.123493879"
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.013237 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.155751 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69fb88d4f9-lt8f4" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.212893 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.243485 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.355797 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.458242 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.542971 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.576676 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 12:09:44 crc kubenswrapper[4703]: I1209 12:09:44.835046 4703 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 12:09:45 crc kubenswrapper[4703]: I1209 12:09:45.201562 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 12:09:45 crc kubenswrapper[4703]: I1209 12:09:45.227333 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 12:09:45 crc kubenswrapper[4703]: I1209 12:09:45.346005 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 12:09:45 crc kubenswrapper[4703]: I1209 12:09:45.422029 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 12:09:45 crc kubenswrapper[4703]: I1209 12:09:45.707961 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.131882 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.278943 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.730856 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.751141 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.751245 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.866008 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.866064 4703 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515" exitCode=137 Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.866114 4703 scope.go:117] "RemoveContainer" containerID="2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.866257 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.885444 4703 scope.go:117] "RemoveContainer" containerID="2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515" Dec 09 12:09:46 crc kubenswrapper[4703]: E1209 12:09:46.885931 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515\": container with ID starting with 2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515 not found: ID does not exist" containerID="2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.885968 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515"} err="failed to get container status \"2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515\": rpc error: code = NotFound desc = could not find container \"2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515\": container with ID starting with 2a437abab8ce1740858840ebbe802fd6c9042dba1ac21b624c0b8bac56a47515 not found: ID does not exist" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935003 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935105 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935131 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935151 4703 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935167 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935230 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935239 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935309 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935353 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935633 4703 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935648 4703 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935658 4703 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.935666 4703 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.942365 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:46 crc kubenswrapper[4703]: I1209 12:09:46.981700 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.036698 4703 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.070673 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.075687 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.075979 4703 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.087849 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.087887 4703 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8fa5304a-20fd-4256-9754-9cbedd8cce91" Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.092172 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:09:47 crc kubenswrapper[4703]: I1209 12:09:47.092224 4703 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8fa5304a-20fd-4256-9754-9cbedd8cce91" Dec 09 12:09:48 crc kubenswrapper[4703]: I1209 12:09:48.058498 4703 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 12:09:48 crc kubenswrapper[4703]: I1209 12:09:48.898016 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 12:10:04 crc kubenswrapper[4703]: I1209 12:10:04.972877 4703 generic.go:334] "Generic (PLEG): container finished" podID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerID="a017079e8d9babf21e6a2956dc97b734c4fd3227220478313cf0a7b1ee0ecc20" exitCode=0 Dec 09 12:10:04 crc kubenswrapper[4703]: I1209 12:10:04.973010 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerDied","Data":"a017079e8d9babf21e6a2956dc97b734c4fd3227220478313cf0a7b1ee0ecc20"} Dec 09 12:10:04 crc kubenswrapper[4703]: I1209 12:10:04.974318 4703 scope.go:117] "RemoveContainer" containerID="a017079e8d9babf21e6a2956dc97b734c4fd3227220478313cf0a7b1ee0ecc20" Dec 09 12:10:05 crc kubenswrapper[4703]: I1209 12:10:05.980009 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerStarted","Data":"1282d2c65925d30c8139e902bd8fd2ab9fda375c81eab8a250cbb36e318fc681"} Dec 09 12:10:05 crc kubenswrapper[4703]: I1209 12:10:05.980995 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:10:05 crc kubenswrapper[4703]: I1209 12:10:05.985648 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:10:09 crc kubenswrapper[4703]: I1209 12:10:09.772938 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"] Dec 09 12:10:09 crc kubenswrapper[4703]: I1209 12:10:09.888875 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"] Dec 09 12:10:09 crc kubenswrapper[4703]: I1209 12:10:09.889420 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerName="route-controller-manager" containerID="cri-o://3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355" gracePeriod=30 Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.004759 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" containerName="controller-manager" containerID="cri-o://024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e" gracePeriod=30 Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.261677 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.336683 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca\") pod \"296d291a-3dbf-46c2-a60c-8646965dcbdc\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.337138 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config\") pod \"296d291a-3dbf-46c2-a60c-8646965dcbdc\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.337237 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert\") pod \"296d291a-3dbf-46c2-a60c-8646965dcbdc\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.337278 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsgr4\" (UniqueName: \"kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4\") pod \"296d291a-3dbf-46c2-a60c-8646965dcbdc\" (UID: \"296d291a-3dbf-46c2-a60c-8646965dcbdc\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.337664 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca" (OuterVolumeSpecName: "client-ca") pod "296d291a-3dbf-46c2-a60c-8646965dcbdc" (UID: "296d291a-3dbf-46c2-a60c-8646965dcbdc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.338233 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config" (OuterVolumeSpecName: "config") pod "296d291a-3dbf-46c2-a60c-8646965dcbdc" (UID: "296d291a-3dbf-46c2-a60c-8646965dcbdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.363414 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4" (OuterVolumeSpecName: "kube-api-access-nsgr4") pod "296d291a-3dbf-46c2-a60c-8646965dcbdc" (UID: "296d291a-3dbf-46c2-a60c-8646965dcbdc"). InnerVolumeSpecName "kube-api-access-nsgr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.363584 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "296d291a-3dbf-46c2-a60c-8646965dcbdc" (UID: "296d291a-3dbf-46c2-a60c-8646965dcbdc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.393876 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438015 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert\") pod \"5668acde-421e-4a2c-8172-0030b25db0f6\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438074 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles\") pod \"5668acde-421e-4a2c-8172-0030b25db0f6\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438191 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4n8p\" (UniqueName: \"kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p\") pod \"5668acde-421e-4a2c-8172-0030b25db0f6\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438265 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config\") pod \"5668acde-421e-4a2c-8172-0030b25db0f6\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438315 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca\") pod \"5668acde-421e-4a2c-8172-0030b25db0f6\" (UID: \"5668acde-421e-4a2c-8172-0030b25db0f6\") " Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438561 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438577 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296d291a-3dbf-46c2-a60c-8646965dcbdc-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438588 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsgr4\" (UniqueName: \"kubernetes.io/projected/296d291a-3dbf-46c2-a60c-8646965dcbdc-kube-api-access-nsgr4\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.438600 4703 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296d291a-3dbf-46c2-a60c-8646965dcbdc-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.439149 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "5668acde-421e-4a2c-8172-0030b25db0f6" (UID: "5668acde-421e-4a2c-8172-0030b25db0f6"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.439340 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5668acde-421e-4a2c-8172-0030b25db0f6" (UID: "5668acde-421e-4a2c-8172-0030b25db0f6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.439737 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config" (OuterVolumeSpecName: "config") pod "5668acde-421e-4a2c-8172-0030b25db0f6" (UID: "5668acde-421e-4a2c-8172-0030b25db0f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.441751 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p" (OuterVolumeSpecName: "kube-api-access-j4n8p") pod "5668acde-421e-4a2c-8172-0030b25db0f6" (UID: "5668acde-421e-4a2c-8172-0030b25db0f6"). InnerVolumeSpecName "kube-api-access-j4n8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.442247 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5668acde-421e-4a2c-8172-0030b25db0f6" (UID: "5668acde-421e-4a2c-8172-0030b25db0f6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.539547 4703 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.539578 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5668acde-421e-4a2c-8172-0030b25db0f6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.539587 4703 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.539601 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4n8p\" (UniqueName: \"kubernetes.io/projected/5668acde-421e-4a2c-8172-0030b25db0f6-kube-api-access-j4n8p\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:10 crc kubenswrapper[4703]: I1209 12:10:10.539611 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5668acde-421e-4a2c-8172-0030b25db0f6-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.012379 4703 generic.go:334] "Generic (PLEG): container finished" podID="5668acde-421e-4a2c-8172-0030b25db0f6" containerID="024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e" exitCode=0 Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.012461 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.012465 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" event={"ID":"5668acde-421e-4a2c-8172-0030b25db0f6","Type":"ContainerDied","Data":"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e"} Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.012609 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2xlcr" event={"ID":"5668acde-421e-4a2c-8172-0030b25db0f6","Type":"ContainerDied","Data":"49c045d7eb6f6a7b5584a7c9fd58ca72ebc2e5f8a80d297c8ecd5502ab36cd1f"} Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.012654 4703 scope.go:117] "RemoveContainer" containerID="024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.015388 4703 generic.go:334] "Generic (PLEG): container finished" podID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerID="3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355" exitCode=0 Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.015442 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.015477 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" event={"ID":"296d291a-3dbf-46c2-a60c-8646965dcbdc","Type":"ContainerDied","Data":"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355"} Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.015557 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr" event={"ID":"296d291a-3dbf-46c2-a60c-8646965dcbdc","Type":"ContainerDied","Data":"4c4b4407e821a709599a2a2c765dfbafbfa33a0a0cca416a6029bb2250d64a1b"} Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.042459 4703 scope.go:117] "RemoveContainer" containerID="024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e" Dec 09 12:10:11 crc kubenswrapper[4703]: E1209 12:10:11.043092 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e\": container with ID starting with 024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e not found: ID does not exist" containerID="024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.043212 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e"} err="failed to get container status \"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e\": rpc error: code = NotFound desc = could not find container \"024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e\": container with ID starting with 024049faa8f130fe0cd0e94c171c0d3f53da21224f21dadb0086203a08df224e not found: ID does not exist" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.043274 4703 scope.go:117] "RemoveContainer" containerID="3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355" Dec 09 
12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.050555 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.055501 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2xlcr"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.064752 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.066437 4703 scope.go:117] "RemoveContainer" containerID="3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.068634 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2zgjr"] Dec 09 12:10:11 crc kubenswrapper[4703]: E1209 12:10:11.069003 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355\": container with ID starting with 3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355 not found: ID does not exist" containerID="3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.069057 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355"} err="failed to get container status \"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355\": rpc error: code = NotFound desc = could not find container \"3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355\": container with ID starting with 3005c287e8529b78364654c9c89fa69c3b777d16c80d42428c7e2a667603f355 not found: ID does not exist" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.078240 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" path="/var/lib/kubelet/pods/296d291a-3dbf-46c2-a60c-8646965dcbdc/volumes" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.078805 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" path="/var/lib/kubelet/pods/5668acde-421e-4a2c-8172-0030b25db0f6/volumes" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.528160 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4"] Dec 09 12:10:11 crc kubenswrapper[4703]: E1209 12:10:11.528822 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" containerName="controller-manager" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529014 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" containerName="controller-manager" Dec 09 12:10:11 crc kubenswrapper[4703]: E1209 12:10:11.529088 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerName="route-controller-manager" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529141 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerName="route-controller-manager"
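
[Annotation] The two E1209 "ContainerStatus from runtime service failed ... NotFound" errors above look alarming but appear to be a benign race: RemoveContainer had already succeeded, so the follow-up status query finds nothing in CRI-O. A sketch (same stdin assumption as before, Python 3.8+) that cross-checks every NotFound container ID against an earlier ContainerDied PLEG event:

    # Sketch: any "could not find container" ID that was never reported dead
    # would be a genuinely lost container; IDs already seen in ContainerDied
    # events are just the deletion race visible in the entries above.
    import re
    import sys

    died, not_found = set(), set()
    for line in sys.stdin:
        if m := re.search(r'"Type":"ContainerDied","Data":"([0-9a-f]{64})"', line):
            died.add(m.group(1))
        if m := re.search(r'could not find container \\"([0-9a-f]{64})\\"', line):
            not_found.add(m.group(1))

    print("unexplained:", sorted(not_found - died))  # expect [] for this log
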
Dec 09 12:10:11 crc kubenswrapper[4703]: E1209 12:10:11.529231 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529289 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529435 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529516 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="296d291a-3dbf-46c2-a60c-8646965dcbdc" containerName="route-controller-manager" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.529581 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="5668acde-421e-4a2c-8172-0030b25db0f6" containerName="controller-manager" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.530874 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.536484 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.536999 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.537145 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.537595 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.538644 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.538813 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.540472 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.541741 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.545814 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.545891 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.546172 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.546349 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.546479 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.546640 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.546996 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.559964 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.567772 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.654814 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.654905 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717f04f2-6f94-4573-bf6d-3f873ef7744e-serving-cert\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.654942 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.654991 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-config\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " 
pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.655276 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skznq\" (UniqueName: \"kubernetes.io/projected/717f04f2-6f94-4573-bf6d-3f873ef7744e-kube-api-access-skznq\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.655388 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.655511 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-proxy-ca-bundles\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.655677 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkpj\" (UniqueName: \"kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.655762 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-client-ca\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-config\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757104 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skznq\" (UniqueName: \"kubernetes.io/projected/717f04f2-6f94-4573-bf6d-3f873ef7744e-kube-api-access-skznq\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757139 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " 
pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757174 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-proxy-ca-bundles\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757227 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzkpj\" (UniqueName: \"kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757251 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-client-ca\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757286 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757316 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717f04f2-6f94-4573-bf6d-3f873ef7744e-serving-cert\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.757345 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.758580 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.759621 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-proxy-ca-bundles\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.759835 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.759998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-client-ca\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.761667 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/717f04f2-6f94-4573-bf6d-3f873ef7744e-config\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.764573 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.766472 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/717f04f2-6f94-4573-bf6d-3f873ef7744e-serving-cert\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.776842 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skznq\" (UniqueName: \"kubernetes.io/projected/717f04f2-6f94-4573-bf6d-3f873ef7744e-kube-api-access-skznq\") pod \"controller-manager-76cfd8b9cd-z7ps4\" (UID: \"717f04f2-6f94-4573-bf6d-3f873ef7744e\") " pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.777464 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzkpj\" (UniqueName: \"kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj\") pod \"route-controller-manager-5f5c4bb775-fblrl\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.854980 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:11 crc kubenswrapper[4703]: I1209 12:10:11.879050 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:12 crc kubenswrapper[4703]: I1209 12:10:12.075958 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4"] Dec 09 12:10:12 crc kubenswrapper[4703]: I1209 12:10:12.104887 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:12 crc kubenswrapper[4703]: W1209 12:10:12.120052 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod388c4861_4b06_46fa_821d_f67043af91a9.slice/crio-a9cb14b817f8bfbe33b19738bf52c387f6b73c426660b540e37377e4596189fa WatchSource:0}: Error finding container a9cb14b817f8bfbe33b19738bf52c387f6b73c426660b540e37377e4596189fa: Status 404 returned error can't find the container with id a9cb14b817f8bfbe33b19738bf52c387f6b73c426660b540e37377e4596189fa Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.031676 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" event={"ID":"388c4861-4b06-46fa-821d-f67043af91a9","Type":"ContainerStarted","Data":"c4c1c67b5a01f8175611a93f178b5e0cd56b13fe0c4ed4cbf4279218bd8eaa31"} Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.032321 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.032344 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" event={"ID":"388c4861-4b06-46fa-821d-f67043af91a9","Type":"ContainerStarted","Data":"a9cb14b817f8bfbe33b19738bf52c387f6b73c426660b540e37377e4596189fa"} Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.033105 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" event={"ID":"717f04f2-6f94-4573-bf6d-3f873ef7744e","Type":"ContainerStarted","Data":"6102fdb7faeb5e22123810b33147cc37b3929006fcabbacd8bc1079775931867"} Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.033174 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" event={"ID":"717f04f2-6f94-4573-bf6d-3f873ef7744e","Type":"ContainerStarted","Data":"246d8ac3b93329eeb557e86b7385f0666cac412113bf783833014d384f8ae697"} Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.033522 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.039063 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.048087 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.064634 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" podStartSLOduration=4.064619866 
podStartE2EDuration="4.064619866s" podCreationTimestamp="2025-12-09 12:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:10:13.061312135 +0000 UTC m=+312.310075664" watchObservedRunningTime="2025-12-09 12:10:13.064619866 +0000 UTC m=+312.313383385" Dec 09 12:10:13 crc kubenswrapper[4703]: I1209 12:10:13.090898 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76cfd8b9cd-z7ps4" podStartSLOduration=4.090879626 podStartE2EDuration="4.090879626s" podCreationTimestamp="2025-12-09 12:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:10:13.089990512 +0000 UTC m=+312.338754031" watchObservedRunningTime="2025-12-09 12:10:13.090879626 +0000 UTC m=+312.339643145" Dec 09 12:10:29 crc kubenswrapper[4703]: I1209 12:10:29.763824 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:29 crc kubenswrapper[4703]: I1209 12:10:29.764562 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" podUID="388c4861-4b06-46fa-821d-f67043af91a9" containerName="route-controller-manager" containerID="cri-o://c4c1c67b5a01f8175611a93f178b5e0cd56b13fe0c4ed4cbf4279218bd8eaa31" gracePeriod=30 Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.083249 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.083306 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.124137 4703 generic.go:334] "Generic (PLEG): container finished" podID="388c4861-4b06-46fa-821d-f67043af91a9" containerID="c4c1c67b5a01f8175611a93f178b5e0cd56b13fe0c4ed4cbf4279218bd8eaa31" exitCode=0 Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.124183 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" event={"ID":"388c4861-4b06-46fa-821d-f67043af91a9","Type":"ContainerDied","Data":"c4c1c67b5a01f8175611a93f178b5e0cd56b13fe0c4ed4cbf4279218bd8eaa31"} Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.692224 4703 util.go:48] "No ready sandbox for pod can be found. 
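
[Annotation] The two pod_startup_latency_tracker entries above report podStartSLOduration=4.064619866s and 4.090879626s. Both pods were created at 12:10:09 (a whole-second API timestamp) and no image pull ran (both *Pulling fields are the zero time 0001-01-01), so the reported duration is simply watchObservedRunningTime minus podCreationTimestamp. A quick arithmetic check of the first entry:

    # Quick check of podStartSLOduration for route-controller-manager-5f5c4bb775-fblrl,
    # using the timestamps reported in the entry (nanoseconds rounded to microseconds).
    from datetime import datetime, timezone

    created = datetime(2025, 12, 9, 12, 10, 9, tzinfo=timezone.utc)
    observed = datetime(2025, 12, 9, 12, 10, 13, 64620, tzinfo=timezone.utc)

    print((observed - created).total_seconds())  # 4.06462 ~= 4.064619866
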
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.724648 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca\") pod \"388c4861-4b06-46fa-821d-f67043af91a9\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.724739 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzkpj\" (UniqueName: \"kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj\") pod \"388c4861-4b06-46fa-821d-f67043af91a9\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.724787 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert\") pod \"388c4861-4b06-46fa-821d-f67043af91a9\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.724872 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config\") pod \"388c4861-4b06-46fa-821d-f67043af91a9\" (UID: \"388c4861-4b06-46fa-821d-f67043af91a9\") " Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.726495 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config" (OuterVolumeSpecName: "config") pod "388c4861-4b06-46fa-821d-f67043af91a9" (UID: "388c4861-4b06-46fa-821d-f67043af91a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.726524 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "388c4861-4b06-46fa-821d-f67043af91a9" (UID: "388c4861-4b06-46fa-821d-f67043af91a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.730708 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj" (OuterVolumeSpecName: "kube-api-access-dzkpj") pod "388c4861-4b06-46fa-821d-f67043af91a9" (UID: "388c4861-4b06-46fa-821d-f67043af91a9"). InnerVolumeSpecName "kube-api-access-dzkpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.731010 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "388c4861-4b06-46fa-821d-f67043af91a9" (UID: "388c4861-4b06-46fa-821d-f67043af91a9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.826259 4703 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.826293 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzkpj\" (UniqueName: \"kubernetes.io/projected/388c4861-4b06-46fa-821d-f67043af91a9-kube-api-access-dzkpj\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.826307 4703 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/388c4861-4b06-46fa-821d-f67043af91a9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:30 crc kubenswrapper[4703]: I1209 12:10:30.826318 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/388c4861-4b06-46fa-821d-f67043af91a9-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.130577 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" event={"ID":"388c4861-4b06-46fa-821d-f67043af91a9","Type":"ContainerDied","Data":"a9cb14b817f8bfbe33b19738bf52c387f6b73c426660b540e37377e4596189fa"} Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.130633 4703 scope.go:117] "RemoveContainer" containerID="c4c1c67b5a01f8175611a93f178b5e0cd56b13fe0c4ed4cbf4279218bd8eaa31" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.130666 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.151118 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.158170 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5c4bb775-fblrl"] Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.538319 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv"] Dec 09 12:10:31 crc kubenswrapper[4703]: E1209 12:10:31.538535 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388c4861-4b06-46fa-821d-f67043af91a9" containerName="route-controller-manager" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.538547 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="388c4861-4b06-46fa-821d-f67043af91a9" containerName="route-controller-manager" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.538648 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="388c4861-4b06-46fa-821d-f67043af91a9" containerName="route-controller-manager" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.539032 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.545282 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.545532 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.547015 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.547674 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.548024 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.548362 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.566484 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv"] Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.636299 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-config\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.636616 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-client-ca\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.636645 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmd6\" (UniqueName: \"kubernetes.io/projected/242bc2df-a599-4b03-80f7-189657dbfc4e-kube-api-access-qrmd6\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.636673 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/242bc2df-a599-4b03-80f7-189657dbfc4e-serving-cert\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.738283 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmd6\" (UniqueName: \"kubernetes.io/projected/242bc2df-a599-4b03-80f7-189657dbfc4e-kube-api-access-qrmd6\") pod 
\"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.738371 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/242bc2df-a599-4b03-80f7-189657dbfc4e-serving-cert\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.738438 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-config\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.738460 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-client-ca\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.739413 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-client-ca\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.739583 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242bc2df-a599-4b03-80f7-189657dbfc4e-config\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.744158 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/242bc2df-a599-4b03-80f7-189657dbfc4e-serving-cert\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.755818 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmd6\" (UniqueName: \"kubernetes.io/projected/242bc2df-a599-4b03-80f7-189657dbfc4e-kube-api-access-qrmd6\") pod \"route-controller-manager-5557787bdc-hprhv\" (UID: \"242bc2df-a599-4b03-80f7-189657dbfc4e\") " pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:31 crc kubenswrapper[4703]: I1209 12:10:31.876831 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:32 crc kubenswrapper[4703]: I1209 12:10:32.290014 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv"] Dec 09 12:10:32 crc kubenswrapper[4703]: W1209 12:10:32.296806 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod242bc2df_a599_4b03_80f7_189657dbfc4e.slice/crio-abdea72c4948ec86a9ffe975813135d76e2850645912582fdaab9bcd2832eaec WatchSource:0}: Error finding container abdea72c4948ec86a9ffe975813135d76e2850645912582fdaab9bcd2832eaec: Status 404 returned error can't find the container with id abdea72c4948ec86a9ffe975813135d76e2850645912582fdaab9bcd2832eaec Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.075632 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="388c4861-4b06-46fa-821d-f67043af91a9" path="/var/lib/kubelet/pods/388c4861-4b06-46fa-821d-f67043af91a9/volumes" Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.142373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" event={"ID":"242bc2df-a599-4b03-80f7-189657dbfc4e","Type":"ContainerStarted","Data":"cb9c9644c8554863433edd49d43e253a2d588461b1fa4f23786c141c5ca183db"} Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.142415 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" event={"ID":"242bc2df-a599-4b03-80f7-189657dbfc4e","Type":"ContainerStarted","Data":"abdea72c4948ec86a9ffe975813135d76e2850645912582fdaab9bcd2832eaec"} Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.142632 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.147801 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" Dec 09 12:10:33 crc kubenswrapper[4703]: I1209 12:10:33.160547 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5557787bdc-hprhv" podStartSLOduration=4.160529572 podStartE2EDuration="4.160529572s" podCreationTimestamp="2025-12-09 12:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:10:33.157152888 +0000 UTC m=+332.405916417" watchObservedRunningTime="2025-12-09 12:10:33.160529572 +0000 UTC m=+332.409293091" Dec 09 12:11:00 crc kubenswrapper[4703]: I1209 12:11:00.083934 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:11:00 crc kubenswrapper[4703]: I1209 12:11:00.084509 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
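
[Annotation] The machine-config-daemon liveness probe keeps failing with connection refused at 12:10:30 and again at 12:11:00, i.e. nothing is listening on 127.0.0.1:8798 at probe time; this is not an unhealthy HTTP response. A sketch reproducing the kubelet's GET by hand from the node (endpoint taken from the entries above):

    # Reproduce the failing liveness probe manually: "connection refused"
    # means no listener on the port, as opposed to a non-200 health reply.
    import urllib.error
    import urllib.request

    try:
        with urllib.request.urlopen("http://127.0.0.1:8798/health", timeout=5) as resp:
            print("status:", resp.status)
    except urllib.error.URLError as exc:
        print("probe failed:", exc.reason)  # e.g. ConnectionRefusedError
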
Dec 09 12:11:00 crc kubenswrapper[4703]: I1209 12:11:00.084509 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.134386 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x9c56"] Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.135608 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.156700 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x9c56"] Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.233624 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-certificates\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.233689 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.233713 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-bound-sa-token\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.233809 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-trusted-ca\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.233860 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkmz\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-kube-api-access-4hkmz\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.234018 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-tls\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.234145 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c96259d-9667-4d1a-a328-ef61b5853d68-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x9c56\" (UID: 
\"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.234180 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c96259d-9667-4d1a-a328-ef61b5853d68-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.256827 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335560 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c96259d-9667-4d1a-a328-ef61b5853d68-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335632 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-certificates\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335669 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-bound-sa-token\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335699 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-trusted-ca\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335723 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkmz\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-kube-api-access-4hkmz\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.335746 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-tls\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 
12:11:09.335774 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c96259d-9667-4d1a-a328-ef61b5853d68-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.336289 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6c96259d-9667-4d1a-a328-ef61b5853d68-ca-trust-extracted\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.336855 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-certificates\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.336874 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c96259d-9667-4d1a-a328-ef61b5853d68-trusted-ca\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.343193 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-registry-tls\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.344784 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6c96259d-9667-4d1a-a328-ef61b5853d68-installation-pull-secrets\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.360869 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-bound-sa-token\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.361631 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkmz\" (UniqueName: \"kubernetes.io/projected/6c96259d-9667-4d1a-a328-ef61b5853d68-kube-api-access-4hkmz\") pod \"image-registry-66df7c8f76-x9c56\" (UID: \"6c96259d-9667-4d1a-a328-ef61b5853d68\") " pod="openshift-image-registry/image-registry-66df7c8f76-x9c56"
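
[Annotation] For the image-registry pod each volume walks the same pipeline visible above: VerifyControllerAttachedVolume started, MountVolume started, MountVolume.SetUp succeeded (the CSI-provisioned PVC shows no separate verify entry in this log, only mount and setup). A sketch (stdin assumption as before, Python 3.8+) pairing the stages per volume name:

    # Sketch: pair the per-volume mount stages for a pod. Volume names sit
    # inside backslash-escaped quotes in the journal text.
    import re
    import sys
    from collections import defaultdict

    STAGES = {
        "operationExecutor.VerifyControllerAttachedVolume started": "verify",
        "operationExecutor.MountVolume started": "mount",
        "MountVolume.SetUp succeeded": "setup",
    }
    VOL = re.compile(r'for volume \\"([^"\\]+)\\"')

    seen = defaultdict(set)
    for line in sys.stdin:
        if (m := VOL.search(line)) is None:
            continue
        for marker, stage in STAGES.items():
            if marker in line:
                seen[m.group(1)].add(stage)

    for vol, stages in sorted(seen.items()):
        print(vol, "OK" if stages >= {"mount", "setup"} else sorted(stages))
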
Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.450053 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56"
Dec 09 12:11:09 crc kubenswrapper[4703]: I1209 12:11:09.857323 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-x9c56"]
Dec 09 12:11:10 crc kubenswrapper[4703]: I1209 12:11:10.341079 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" event={"ID":"6c96259d-9667-4d1a-a328-ef61b5853d68","Type":"ContainerStarted","Data":"726e6230b5e5fd1e750270944b5b640556de65167c8632ce4a9f1d957f539ea2"}
Dec 09 12:11:10 crc kubenswrapper[4703]: I1209 12:11:10.341434 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" event={"ID":"6c96259d-9667-4d1a-a328-ef61b5853d68","Type":"ContainerStarted","Data":"4bd061bb827fd842ccd6deaab4399e80da0c1efaac770640933b6b6673c7e321"}
Dec 09 12:11:10 crc kubenswrapper[4703]: I1209 12:11:10.341480 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56"
Dec 09 12:11:10 crc kubenswrapper[4703]: I1209 12:11:10.359982 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" podStartSLOduration=1.359954116 podStartE2EDuration="1.359954116s" podCreationTimestamp="2025-12-09 12:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:11:10.359636747 +0000 UTC m=+369.608400266" watchObservedRunningTime="2025-12-09 12:11:10.359954116 +0000 UTC m=+369.608717635"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.609386 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.610124 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2dxk" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="registry-server" containerID="cri-o://ce23f2cecd5ef604f45a01af6cc4cbc196b5c36a17becd4aa72b345c05fc6499" gracePeriod=30
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.622721 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.623376 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xkww" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="registry-server" containerID="cri-o://c37c9e3e8358b2782077cadf7336e0fc1751fa099d459ac7b6070a1aba612859" gracePeriod=30
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.627665 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.627881 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" containerID="cri-o://1282d2c65925d30c8139e902bd8fd2ab9fda375c81eab8a250cbb36e318fc681" gracePeriod=30
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.639794 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"]
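Each "SyncLoop DELETE" above is followed within milliseconds by kuberuntime_container.go:808 killing that pod's containers with gracePeriod=30, the API default for terminationGracePeriodSeconds. The number the kubelet resolves here becomes the stop timeout handed to CRI-O: SIGTERM immediately, SIGKILL once the deadline passes. A sketch of that resolution, with illustrative names rather than the actual kubelet function:

package main

import "fmt"

// effectiveGracePeriod mirrors the precedence visible in these logs:
// a deletion-time override (e.g. kubectl delete --grace-period=N) wins,
// otherwise the pod spec's terminationGracePeriodSeconds applies.
func effectiveGracePeriod(deletionOverride *int64, specSeconds int64) int64 {
	if deletionOverride != nil {
		return *deletionOverride
	}
	return specSeconds // 30s is the API default, matching every line above
}

func main() {
	gp := effectiveGracePeriod(nil, 30)
	// The runtime is then asked to stop the container with this timeout:
	// SIGTERM now, SIGKILL after gp seconds if the process is still alive.
	fmt.Printf("gracePeriod=%d\n", gp)
}

The ContainerDied events with exitCode=0 about 800ms later show all five registry/operator processes exited well inside the 30-second window, so no SIGKILL was needed.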
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.640052 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bkmjk" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="registry-server" containerID="cri-o://583dcb944ad38886efe2ead42442edcae149f2128528491d31a626f8c7707de2" gracePeriod=30
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.647029 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.647354 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9nc2d" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server" containerID="cri-o://b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f" gracePeriod=30
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.651856 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2s2wq"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.656804 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.663403 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48rr\" (UniqueName: \"kubernetes.io/projected/da71670f-b188-4a71-ac05-55ad7e238d62-kube-api-access-z48rr\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.663703 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.663832 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.671883 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2s2wq"]
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.765344 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z48rr\" (UniqueName: \"kubernetes.io/projected/da71670f-b188-4a71-ac05-55ad7e238d62-kube-api-access-z48rr\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq"
Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.765432 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-operator-metrics\") pod
\"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.765484 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.767432 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.779537 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da71670f-b188-4a71-ac05-55ad7e238d62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.782031 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z48rr\" (UniqueName: \"kubernetes.io/projected/da71670f-b188-4a71-ac05-55ad7e238d62-kube-api-access-z48rr\") pod \"marketplace-operator-79b997595-2s2wq\" (UID: \"da71670f-b188-4a71-ac05-55ad7e238d62\") " pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:17 crc kubenswrapper[4703]: I1209 12:11:17.988052 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.378601 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2s2wq"] Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.390712 4703 generic.go:334] "Generic (PLEG): container finished" podID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerID="583dcb944ad38886efe2ead42442edcae149f2128528491d31a626f8c7707de2" exitCode=0 Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.390819 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerDied","Data":"583dcb944ad38886efe2ead42442edcae149f2128528491d31a626f8c7707de2"} Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.393711 4703 generic.go:334] "Generic (PLEG): container finished" podID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerID="1282d2c65925d30c8139e902bd8fd2ab9fda375c81eab8a250cbb36e318fc681" exitCode=0 Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.393835 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerDied","Data":"1282d2c65925d30c8139e902bd8fd2ab9fda375c81eab8a250cbb36e318fc681"} Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.394403 4703 scope.go:117] "RemoveContainer" containerID="a017079e8d9babf21e6a2956dc97b734c4fd3227220478313cf0a7b1ee0ecc20" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.406885 4703 generic.go:334] "Generic (PLEG): container finished" podID="2e4c8435-609b-49fe-9f13-17547856b18a" containerID="b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f" exitCode=0 Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.406954 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerDied","Data":"b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f"} Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.410546 4703 generic.go:334] "Generic (PLEG): container finished" podID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerID="ce23f2cecd5ef604f45a01af6cc4cbc196b5c36a17becd4aa72b345c05fc6499" exitCode=0 Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.410609 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerDied","Data":"ce23f2cecd5ef604f45a01af6cc4cbc196b5c36a17becd4aa72b345c05fc6499"} Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.412677 4703 generic.go:334] "Generic (PLEG): container finished" podID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerID="c37c9e3e8358b2782077cadf7336e0fc1751fa099d459ac7b6070a1aba612859" exitCode=0 Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.412702 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerDied","Data":"c37c9e3e8358b2782077cadf7336e0fc1751fa099d459ac7b6070a1aba612859"} Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.504052 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2dxk" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.577806 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities\") pod \"a6d97ca9-dbfc-4bb7-9784-32152f514675\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.577865 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-catalog-content\") pod \"a6d97ca9-dbfc-4bb7-9784-32152f514675\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.577922 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r94k\" (UniqueName: \"kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k\") pod \"a6d97ca9-dbfc-4bb7-9784-32152f514675\" (UID: \"a6d97ca9-dbfc-4bb7-9784-32152f514675\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.578507 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities" (OuterVolumeSpecName: "utilities") pod "a6d97ca9-dbfc-4bb7-9784-32152f514675" (UID: "a6d97ca9-dbfc-4bb7-9784-32152f514675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.589164 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k" (OuterVolumeSpecName: "kube-api-access-9r94k") pod "a6d97ca9-dbfc-4bb7-9784-32152f514675" (UID: "a6d97ca9-dbfc-4bb7-9784-32152f514675"). InnerVolumeSpecName "kube-api-access-9r94k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.604447 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh"
Dec 09 12:11:18 crc kubenswrapper[4703]: E1209 12:11:18.604823 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f is running failed: container process not found" containerID="b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 09 12:11:18 crc kubenswrapper[4703]: E1209 12:11:18.605117 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f is running failed: container process not found" containerID="b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 09 12:11:18 crc kubenswrapper[4703]: E1209 12:11:18.605407 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f is running failed: container process not found" containerID="b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f" cmd=["grpc_health_probe","-addr=:50051"]
Dec 09 12:11:18 crc kubenswrapper[4703]: E1209 12:11:18.605528 4703 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-9nc2d" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server"
Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.612531 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nc2d"
Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.632038 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkmjk"
Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.639896 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xkww"
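The ExecSync errors above are a benign shutdown race: the readiness probe for redhat-operators-9nc2d runs grpc_health_probe -addr=:50051 inside the registry-server container, and once that process has exited (ContainerDied was logged a moment earlier) CRI-O can only answer NotFound, which the prober surfaces as "Probe errored". What the probe binary does amounts to a standard gRPC health check; a self-contained Go equivalent of that check (my sketch using the stock grpc-go health API, not the probe's actual source):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the catalog's gRPC endpoint, plaintext, as the probe does.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// An empty Service name asks for the server's overall health.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{Service: ""})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		fmt.Fprintln(os.Stderr, "not serving:", err)
		os.Exit(1)
	}
	fmt.Println("SERVING")
}

The non-zero exit on any failure is what the kubelet's prober turns into a readiness state; here the failure is expected, since the container being probed is the one being torn down.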
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.678946 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics\") pod \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.678999 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities\") pod \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679052 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content\") pod \"621283ab-7eb7-4952-9059-c3c4209bca7b\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679075 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content\") pod \"2e4c8435-609b-49fe-9f13-17547856b18a\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679097 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content\") pod \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679121 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7gxl\" (UniqueName: \"kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl\") pod \"2e4c8435-609b-49fe-9f13-17547856b18a\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679149 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca\") pod \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679213 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4cq\" (UniqueName: \"kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq\") pod \"621283ab-7eb7-4952-9059-c3c4209bca7b\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679252 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67ddz\" (UniqueName: \"kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz\") pod \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\" (UID: \"d8699089-c9ff-4389-8a8e-72b5c976b5ae\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679274 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities\") pod \"621283ab-7eb7-4952-9059-c3c4209bca7b\" (UID: \"621283ab-7eb7-4952-9059-c3c4209bca7b\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679296 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klflg\" (UniqueName: \"kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg\") pod \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\" (UID: \"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679327 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities\") pod \"2e4c8435-609b-49fe-9f13-17547856b18a\" (UID: \"2e4c8435-609b-49fe-9f13-17547856b18a\") " Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679889 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679906 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d97ca9-dbfc-4bb7-9784-32152f514675-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.679920 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r94k\" (UniqueName: \"kubernetes.io/projected/a6d97ca9-dbfc-4bb7-9784-32152f514675-kube-api-access-9r94k\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.681406 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities" (OuterVolumeSpecName: "utilities") pod "2e4c8435-609b-49fe-9f13-17547856b18a" (UID: "2e4c8435-609b-49fe-9f13-17547856b18a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.682893 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities" (OuterVolumeSpecName: "utilities") pod "621283ab-7eb7-4952-9059-c3c4209bca7b" (UID: "621283ab-7eb7-4952-9059-c3c4209bca7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.684072 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl" (OuterVolumeSpecName: "kube-api-access-n7gxl") pod "2e4c8435-609b-49fe-9f13-17547856b18a" (UID: "2e4c8435-609b-49fe-9f13-17547856b18a"). InnerVolumeSpecName "kube-api-access-n7gxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.684669 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities" (OuterVolumeSpecName: "utilities") pod "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" (UID: "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.685296 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d8699089-c9ff-4389-8a8e-72b5c976b5ae" (UID: "d8699089-c9ff-4389-8a8e-72b5c976b5ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.690739 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz" (OuterVolumeSpecName: "kube-api-access-67ddz") pod "d8699089-c9ff-4389-8a8e-72b5c976b5ae" (UID: "d8699089-c9ff-4389-8a8e-72b5c976b5ae"). InnerVolumeSpecName "kube-api-access-67ddz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.691576 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq" (OuterVolumeSpecName: "kube-api-access-qb4cq") pod "621283ab-7eb7-4952-9059-c3c4209bca7b" (UID: "621283ab-7eb7-4952-9059-c3c4209bca7b"). InnerVolumeSpecName "kube-api-access-qb4cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.705403 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg" (OuterVolumeSpecName: "kube-api-access-klflg") pod "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" (UID: "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e"). InnerVolumeSpecName "kube-api-access-klflg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.705511 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d8699089-c9ff-4389-8a8e-72b5c976b5ae" (UID: "d8699089-c9ff-4389-8a8e-72b5c976b5ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.707624 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" (UID: "5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.755277 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "621283ab-7eb7-4952-9059-c3c4209bca7b" (UID: "621283ab-7eb7-4952-9059-c3c4209bca7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781242 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4cq\" (UniqueName: \"kubernetes.io/projected/621283ab-7eb7-4952-9059-c3c4209bca7b-kube-api-access-qb4cq\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781276 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67ddz\" (UniqueName: \"kubernetes.io/projected/d8699089-c9ff-4389-8a8e-72b5c976b5ae-kube-api-access-67ddz\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781287 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781298 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klflg\" (UniqueName: \"kubernetes.io/projected/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-kube-api-access-klflg\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781309 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781327 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781336 4703 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781345 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621283ab-7eb7-4952-9059-c3c4209bca7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781359 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781368 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7gxl\" (UniqueName: \"kubernetes.io/projected/2e4c8435-609b-49fe-9f13-17547856b18a-kube-api-access-n7gxl\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.781404 4703 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8699089-c9ff-4389-8a8e-72b5c976b5ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.812773 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e4c8435-609b-49fe-9f13-17547856b18a" (UID: "2e4c8435-609b-49fe-9f13-17547856b18a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:18 crc kubenswrapper[4703]: I1209 12:11:18.882143 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c8435-609b-49fe-9f13-17547856b18a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.420437 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xkww" event={"ID":"621283ab-7eb7-4952-9059-c3c4209bca7b","Type":"ContainerDied","Data":"76c26b9c6992ccadd1f1481349a151213db3c8473caf4210bba3c1032cf3e551"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.420514 4703 scope.go:117] "RemoveContainer" containerID="c37c9e3e8358b2782077cadf7336e0fc1751fa099d459ac7b6070a1aba612859" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.420860 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xkww" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.422943 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkmjk" event={"ID":"5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e","Type":"ContainerDied","Data":"6c84979f6638f5504aeb7d56a2788bb369a2d8d857e6600da4354ad528dae1cf"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.422998 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkmjk" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.426110 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" event={"ID":"d8699089-c9ff-4389-8a8e-72b5c976b5ae","Type":"ContainerDied","Data":"94e01773e15f34b8c49f65171fae43bac88817c80fb70a0d8317cfdff376b1e7"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.426236 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vlkrh" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.428751 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" event={"ID":"da71670f-b188-4a71-ac05-55ad7e238d62","Type":"ContainerStarted","Data":"c832d8cd431cb5f2fe05602ba4064c07c4393c33f87a7c1a7779c11843ee1947"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.428809 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" event={"ID":"da71670f-b188-4a71-ac05-55ad7e238d62","Type":"ContainerStarted","Data":"b7db5bf43dbf4f36e8fed6ff0a4137aca73ca0fdcbdf0b321ada242e582ec13b"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.429888 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.434648 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.439497 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9nc2d" event={"ID":"2e4c8435-609b-49fe-9f13-17547856b18a","Type":"ContainerDied","Data":"dabb28cd7f7044484b4b997a8195183b37b30faec42c038faa7a16534bc48458"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.439670 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9nc2d" Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.445395 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2dxk" event={"ID":"a6d97ca9-dbfc-4bb7-9784-32152f514675","Type":"ContainerDied","Data":"ef3b0ccc9a4e865c5a147c61fa65fe22c3e0430da19335e7f753e35382f5a5c9"} Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.445535 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2dxk"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.451312 4703 scope.go:117] "RemoveContainer" containerID="d6a59fab2942575e02d88bde0007eb122ee8e310ec2ab013a6b0ce5ba5f418eb"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.457352 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.469459 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xkww"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.472672 4703 scope.go:117] "RemoveContainer" containerID="8b3b7159d30f5014913848d5ca60f2c27acd05e0825d83215a43926cc1833fd3"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.479776 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.491137 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkmjk"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.500524 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2s2wq" podStartSLOduration=2.5005058719999997 podStartE2EDuration="2.500505872s" podCreationTimestamp="2025-12-09 12:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:11:19.480154122 +0000 UTC m=+378.728917651" watchObservedRunningTime="2025-12-09 12:11:19.500505872 +0000 UTC m=+378.749269391"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.506053 4703 scope.go:117] "RemoveContainer" containerID="583dcb944ad38886efe2ead42442edcae149f2128528491d31a626f8c7707de2"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.520996 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.529173 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9nc2d"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.530261 4703 scope.go:117] "RemoveContainer" containerID="674065bafe35563c804dee9435387e8d63129a31218dbf24208de54954d20cf3"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.538318 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.543707 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vlkrh"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.550514 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.554815 4703 scope.go:117] "RemoveContainer" containerID="f78172f60d26bf604615d807f573f6f6c127e409eee7646867d7de318a7d1ca4"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.555495 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2dxk"]
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.568945 4703 scope.go:117] "RemoveContainer" containerID="1282d2c65925d30c8139e902bd8fd2ab9fda375c81eab8a250cbb36e318fc681"
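Interleaved with the DELETE/REMOVE/RemoveContainer teardown of the old pods, the "Observed pod startup duration" entry for the replacement marketplace-operator-79b997595-2s2wq records podStartSLOduration=2.500505872s. The arithmetic is visible in the entry's own fields: the SLO duration is watchObservedRunningTime minus podCreationTimestamp, less any time spent pulling images, and the zero-valued firstStartedPulling/lastFinishedPulling indicate no pull happened, so the SLO and end-to-end durations coincide. Checking the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the "Observed pod startup duration" entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-12-09T12:11:17Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-12-09T12:11:19.500505872Z")
	pull := time.Duration(0) // zero-valued pull window: images were already present

	fmt.Println(running.Sub(created) - pull) // prints 2.500505872s, the logged SLO duration
}

The same check works for the image-registry pod earlier (12:11:10.359954116 − 12:11:09 = 1.359954116s), and the long tail in podStartSLOduration=2.5005058719999997 is just float64 formatting of that same duration.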
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.582645 4703 scope.go:117] "RemoveContainer" containerID="b018e667e0212de01425abc6cf5678df03b6ca3461762d4085ec79680cf93f0f"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.595066 4703 scope.go:117] "RemoveContainer" containerID="e59bdddbb96e033179062515ac6627b681b00afb0f97221563fad30b1774fee2"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.609559 4703 scope.go:117] "RemoveContainer" containerID="0fcc5b77b0e92f6d4c0c2c2ecdafcb0a1fa9ee0a6d6a11288f19934db17806d6"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.622966 4703 scope.go:117] "RemoveContainer" containerID="ce23f2cecd5ef604f45a01af6cc4cbc196b5c36a17becd4aa72b345c05fc6499"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.646409 4703 scope.go:117] "RemoveContainer" containerID="337d292b3c04637c2858d0f0d502bde68329ed3d7fb1c0b8eb00dc3f4baf5856"
Dec 09 12:11:19 crc kubenswrapper[4703]: I1209 12:11:19.661019 4703 scope.go:117] "RemoveContainer" containerID="650f55920ad556417d2ae71fcae01687332d9d5271cc9c84da9eebe55d781c47"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551008 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5d764"]
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551232 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="extract-utilities"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551243 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="extract-utilities"
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551252 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="extract-utilities"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551257 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="extract-utilities"
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551267 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="extract-content"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551272 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="extract-content"
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551280 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="registry-server"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551286 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="registry-server"
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551295 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator"
Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551302 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator"
Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551309 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="extract-utilities"
containerName="extract-utilities" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551322 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551328 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551336 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551341 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551348 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="extract-utilities" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551355 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="extract-utilities" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551365 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551370 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551378 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551384 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551400 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551411 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551421 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551428 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="extract-content" Dec 09 12:11:20 crc kubenswrapper[4703]: E1209 12:11:20.551438 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551445 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551535 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551547 4703 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551556 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551564 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551571 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" containerName="marketplace-operator" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.551578 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" containerName="registry-server" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.552399 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.554293 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.566733 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5d764"] Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.711778 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hhf\" (UniqueName: \"kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.711899 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.711927 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.812984 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.813041 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.813081 
4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hhf\" (UniqueName: \"kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.813585 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.813699 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.835636 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hhf\" (UniqueName: \"kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf\") pod \"certified-operators-5d764\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:20 crc kubenswrapper[4703]: I1209 12:11:20.869492 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.080773 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4c8435-609b-49fe-9f13-17547856b18a" path="/var/lib/kubelet/pods/2e4c8435-609b-49fe-9f13-17547856b18a/volumes" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.081821 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e" path="/var/lib/kubelet/pods/5508ed7b-36a7-4d73-a489-6d7eb6eb6c3e/volumes" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.082398 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621283ab-7eb7-4952-9059-c3c4209bca7b" path="/var/lib/kubelet/pods/621283ab-7eb7-4952-9059-c3c4209bca7b/volumes" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.083601 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d97ca9-dbfc-4bb7-9784-32152f514675" path="/var/lib/kubelet/pods/a6d97ca9-dbfc-4bb7-9784-32152f514675/volumes" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.084179 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8699089-c9ff-4389-8a8e-72b5c976b5ae" path="/var/lib/kubelet/pods/d8699089-c9ff-4389-8a8e-72b5c976b5ae/volumes" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.260432 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5d764"] Dec 09 12:11:21 crc kubenswrapper[4703]: W1209 12:11:21.263107 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4f1675_f3ef_46ce_a4d3_24f581829298.slice/crio-f30168627a37e01d968e7ca050e7ff0f8365817e859b8c1f31e38e368f3e4019 WatchSource:0}: Error finding container f30168627a37e01d968e7ca050e7ff0f8365817e859b8c1f31e38e368f3e4019: Status 404 
returned error can't find the container with id f30168627a37e01d968e7ca050e7ff0f8365817e859b8c1f31e38e368f3e4019 Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.470924 4703 generic.go:334] "Generic (PLEG): container finished" podID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerID="10b1852da64f22d31c1bf1f64d0532c5f9d2a0f2bbd2489ac213d821bf8c50bc" exitCode=0 Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.471015 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerDied","Data":"10b1852da64f22d31c1bf1f64d0532c5f9d2a0f2bbd2489ac213d821bf8c50bc"} Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.471313 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerStarted","Data":"f30168627a37e01d968e7ca050e7ff0f8365817e859b8c1f31e38e368f3e4019"} Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.561708 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8rlrm"] Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.562986 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.565227 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.566439 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rlrm"] Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.723800 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-utilities\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.724894 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-catalog-content\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.725037 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rskrl\" (UniqueName: \"kubernetes.io/projected/e5c5b466-1538-4449-b93c-abeee2b2c8ff-kube-api-access-rskrl\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.825892 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-catalog-content\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.825956 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rskrl\" (UniqueName: \"kubernetes.io/projected/e5c5b466-1538-4449-b93c-abeee2b2c8ff-kube-api-access-rskrl\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.826017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-utilities\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.826501 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-catalog-content\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.826534 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5c5b466-1538-4449-b93c-abeee2b2c8ff-utilities\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.844341 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rskrl\" (UniqueName: \"kubernetes.io/projected/e5c5b466-1538-4449-b93c-abeee2b2c8ff-kube-api-access-rskrl\") pod \"community-operators-8rlrm\" (UID: \"e5c5b466-1538-4449-b93c-abeee2b2c8ff\") " pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:21 crc kubenswrapper[4703]: I1209 12:11:21.883985 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.282737 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8rlrm"] Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.479132 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5c5b466-1538-4449-b93c-abeee2b2c8ff" containerID="7e64df490d102b1ff70ff36f8d8667d10f65073949aba6a78cbc91866a0f2959" exitCode=0 Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.479242 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rlrm" event={"ID":"e5c5b466-1538-4449-b93c-abeee2b2c8ff","Type":"ContainerDied","Data":"7e64df490d102b1ff70ff36f8d8667d10f65073949aba6a78cbc91866a0f2959"} Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.479824 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rlrm" event={"ID":"e5c5b466-1538-4449-b93c-abeee2b2c8ff","Type":"ContainerStarted","Data":"f10260d83003a841cec5dc1e2763b75354d14dbaa5061ce6189354b0c07dd9e7"} Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.487655 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerStarted","Data":"de178cf5ac9455227e06a845671bf5f8b530f4a7e890683a77504cf9a68743ba"} Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.952716 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvftx"] Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.954209 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.958855 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 12:11:22 crc kubenswrapper[4703]: I1209 12:11:22.966913 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvftx"] Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.046671 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcldw\" (UniqueName: \"kubernetes.io/projected/71b2ce30-0051-4329-b922-c8647bb87bb1-kube-api-access-rcldw\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.046739 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-catalog-content\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.046812 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-utilities\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.147496 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-utilities\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.148157 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcldw\" (UniqueName: \"kubernetes.io/projected/71b2ce30-0051-4329-b922-c8647bb87bb1-kube-api-access-rcldw\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.148290 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-catalog-content\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.148710 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-utilities\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.148794 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b2ce30-0051-4329-b922-c8647bb87bb1-catalog-content\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.170641 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcldw\" (UniqueName: \"kubernetes.io/projected/71b2ce30-0051-4329-b922-c8647bb87bb1-kube-api-access-rcldw\") pod \"redhat-marketplace-kvftx\" (UID: \"71b2ce30-0051-4329-b922-c8647bb87bb1\") " pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.285056 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.488945 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvftx"] Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.498220 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rlrm" event={"ID":"e5c5b466-1538-4449-b93c-abeee2b2c8ff","Type":"ContainerStarted","Data":"3410a4f71c361623f329b933b7cd0bf4bed3168e08cc828589b937bf16810a6b"} Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.500382 4703 generic.go:334] "Generic (PLEG): container finished" podID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerID="de178cf5ac9455227e06a845671bf5f8b530f4a7e890683a77504cf9a68743ba" exitCode=0 Dec 09 12:11:23 crc kubenswrapper[4703]: I1209 12:11:23.500426 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerDied","Data":"de178cf5ac9455227e06a845671bf5f8b530f4a7e890683a77504cf9a68743ba"} Dec 09 12:11:23 crc kubenswrapper[4703]: W1209 12:11:23.500523 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b2ce30_0051_4329_b922_c8647bb87bb1.slice/crio-73a018905a67606ccaaeef9b33bb2f981e0551762fe9e0910c5dfe4d8f092d62 WatchSource:0}: Error finding container 73a018905a67606ccaaeef9b33bb2f981e0551762fe9e0910c5dfe4d8f092d62: Status 404 returned error can't find the container with id 73a018905a67606ccaaeef9b33bb2f981e0551762fe9e0910c5dfe4d8f092d62 Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.155350 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.157517 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.161913 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpcrp\" (UniqueName: \"kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.162107 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.162158 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.163707 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.165740 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.263683 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.263740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.263779 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpcrp\" (UniqueName: \"kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.264247 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.264254 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " 
pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.291968 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpcrp\" (UniqueName: \"kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp\") pod \"redhat-operators-t6tnb\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.489153 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.519251 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5c5b466-1538-4449-b93c-abeee2b2c8ff" containerID="3410a4f71c361623f329b933b7cd0bf4bed3168e08cc828589b937bf16810a6b" exitCode=0 Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.519341 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rlrm" event={"ID":"e5c5b466-1538-4449-b93c-abeee2b2c8ff","Type":"ContainerDied","Data":"3410a4f71c361623f329b933b7cd0bf4bed3168e08cc828589b937bf16810a6b"} Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.526029 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerStarted","Data":"eaf656918f1a36030ffec613e18ddf1e37e39349549fda80a31eea67d1b29328"} Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.529918 4703 generic.go:334] "Generic (PLEG): container finished" podID="71b2ce30-0051-4329-b922-c8647bb87bb1" containerID="736753b6e53b69b2916ed2f538c6b7aa936992f55ec853c4500d431c86d578b6" exitCode=0 Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.529973 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvftx" event={"ID":"71b2ce30-0051-4329-b922-c8647bb87bb1","Type":"ContainerDied","Data":"736753b6e53b69b2916ed2f538c6b7aa936992f55ec853c4500d431c86d578b6"} Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.530008 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvftx" event={"ID":"71b2ce30-0051-4329-b922-c8647bb87bb1","Type":"ContainerStarted","Data":"73a018905a67606ccaaeef9b33bb2f981e0551762fe9e0910c5dfe4d8f092d62"} Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.597593 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5d764" podStartSLOduration=2.186215483 podStartE2EDuration="4.597569774s" podCreationTimestamp="2025-12-09 12:11:20 +0000 UTC" firstStartedPulling="2025-12-09 12:11:21.475599076 +0000 UTC m=+380.724362595" lastFinishedPulling="2025-12-09 12:11:23.886953367 +0000 UTC m=+383.135716886" observedRunningTime="2025-12-09 12:11:24.595714753 +0000 UTC m=+383.844478272" watchObservedRunningTime="2025-12-09 12:11:24.597569774 +0000 UTC m=+383.846333293" Dec 09 12:11:24 crc kubenswrapper[4703]: I1209 12:11:24.952727 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.538997 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8rlrm" 
event={"ID":"e5c5b466-1538-4449-b93c-abeee2b2c8ff","Type":"ContainerStarted","Data":"2bdf6c74bc471dd35298acead5cdc344f03f95aa1a55ecf55b92e6d4fc83a09f"} Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.541027 4703 generic.go:334] "Generic (PLEG): container finished" podID="fd23e539-b882-4063-88f9-2927e5439ade" containerID="4d2019961aa8594eea876849e5ef4e1f5d704cadad53a8e463ecefba8c2d4805" exitCode=0 Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.541112 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerDied","Data":"4d2019961aa8594eea876849e5ef4e1f5d704cadad53a8e463ecefba8c2d4805"} Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.541170 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerStarted","Data":"8339bcba1c924eb678578fab225bf6ccbcb5dfe8f518fbce8d505644dd392b52"} Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.543753 4703 generic.go:334] "Generic (PLEG): container finished" podID="71b2ce30-0051-4329-b922-c8647bb87bb1" containerID="1c3632f67ffd64e30e63432534fa3ac490a573e581c38e510445a2419cc20f0a" exitCode=0 Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.544162 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvftx" event={"ID":"71b2ce30-0051-4329-b922-c8647bb87bb1","Type":"ContainerDied","Data":"1c3632f67ffd64e30e63432534fa3ac490a573e581c38e510445a2419cc20f0a"} Dec 09 12:11:25 crc kubenswrapper[4703]: I1209 12:11:25.562474 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8rlrm" podStartSLOduration=2.095644707 podStartE2EDuration="4.562457696s" podCreationTimestamp="2025-12-09 12:11:21 +0000 UTC" firstStartedPulling="2025-12-09 12:11:22.4815997 +0000 UTC m=+381.730363219" lastFinishedPulling="2025-12-09 12:11:24.948412689 +0000 UTC m=+384.197176208" observedRunningTime="2025-12-09 12:11:25.558724453 +0000 UTC m=+384.807487972" watchObservedRunningTime="2025-12-09 12:11:25.562457696 +0000 UTC m=+384.811221215" Dec 09 12:11:26 crc kubenswrapper[4703]: I1209 12:11:26.554136 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvftx" event={"ID":"71b2ce30-0051-4329-b922-c8647bb87bb1","Type":"ContainerStarted","Data":"20571cb09d91bef3d656b2f7528e62bc49c2fec7f2ac726d740ef8adc7022dc5"} Dec 09 12:11:26 crc kubenswrapper[4703]: I1209 12:11:26.558400 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerStarted","Data":"5a49c8641dc1c8474a99d6e93adf0d3ec6ffdd944f93dccc3fea5fbfc8d974d7"} Dec 09 12:11:26 crc kubenswrapper[4703]: I1209 12:11:26.579713 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvftx" podStartSLOduration=3.170993901 podStartE2EDuration="4.57969091s" podCreationTimestamp="2025-12-09 12:11:22 +0000 UTC" firstStartedPulling="2025-12-09 12:11:24.532732757 +0000 UTC m=+383.781496276" lastFinishedPulling="2025-12-09 12:11:25.941429746 +0000 UTC m=+385.190193285" observedRunningTime="2025-12-09 12:11:26.574416594 +0000 UTC m=+385.823180113" watchObservedRunningTime="2025-12-09 12:11:26.57969091 +0000 UTC m=+385.828454429" Dec 09 12:11:27 crc 
kubenswrapper[4703]: I1209 12:11:27.567606 4703 generic.go:334] "Generic (PLEG): container finished" podID="fd23e539-b882-4063-88f9-2927e5439ade" containerID="5a49c8641dc1c8474a99d6e93adf0d3ec6ffdd944f93dccc3fea5fbfc8d974d7" exitCode=0 Dec 09 12:11:27 crc kubenswrapper[4703]: I1209 12:11:27.567785 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerDied","Data":"5a49c8641dc1c8474a99d6e93adf0d3ec6ffdd944f93dccc3fea5fbfc8d974d7"} Dec 09 12:11:29 crc kubenswrapper[4703]: I1209 12:11:29.455876 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-x9c56" Dec 09 12:11:29 crc kubenswrapper[4703]: I1209 12:11:29.512264 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"] Dec 09 12:11:29 crc kubenswrapper[4703]: I1209 12:11:29.594740 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerStarted","Data":"129c0307304ea4e9ff3df1f1fb84e811d8f812c8d2fd1333c5a99d50e4a53ba6"} Dec 09 12:11:29 crc kubenswrapper[4703]: I1209 12:11:29.631813 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6tnb" podStartSLOduration=2.926021851 podStartE2EDuration="5.631793784s" podCreationTimestamp="2025-12-09 12:11:24 +0000 UTC" firstStartedPulling="2025-12-09 12:11:25.542789204 +0000 UTC m=+384.791552723" lastFinishedPulling="2025-12-09 12:11:28.248561137 +0000 UTC m=+387.497324656" observedRunningTime="2025-12-09 12:11:29.629625625 +0000 UTC m=+388.878389154" watchObservedRunningTime="2025-12-09 12:11:29.631793784 +0000 UTC m=+388.880557303" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.084026 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.084334 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.084380 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.084931 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.084981 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" 
containerName="machine-config-daemon" containerID="cri-o://6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585" gracePeriod=600 Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.602307 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585" exitCode=0 Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.602393 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585"} Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.603007 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b"} Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.603033 4703 scope.go:117] "RemoveContainer" containerID="d149dfa980c5e9cee99cd7ba8a51bca19529368e037a1fd8e9b8dbea04179378" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.870146 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.870509 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:30 crc kubenswrapper[4703]: I1209 12:11:30.911893 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:31 crc kubenswrapper[4703]: I1209 12:11:31.649092 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5d764" Dec 09 12:11:31 crc kubenswrapper[4703]: I1209 12:11:31.885051 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:31 crc kubenswrapper[4703]: I1209 12:11:31.885374 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:31 crc kubenswrapper[4703]: I1209 12:11:31.924477 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:32 crc kubenswrapper[4703]: I1209 12:11:32.663496 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8rlrm" Dec 09 12:11:33 crc kubenswrapper[4703]: I1209 12:11:33.286205 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:33 crc kubenswrapper[4703]: I1209 12:11:33.286281 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:33 crc kubenswrapper[4703]: I1209 12:11:33.325760 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:33 crc kubenswrapper[4703]: I1209 12:11:33.659080 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvftx" Dec 09 12:11:34 crc 
kubenswrapper[4703]: I1209 12:11:34.490200 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:34 crc kubenswrapper[4703]: I1209 12:11:34.490643 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:34 crc kubenswrapper[4703]: I1209 12:11:34.599110 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:34 crc kubenswrapper[4703]: I1209 12:11:34.671358 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:11:54 crc kubenswrapper[4703]: I1209 12:11:54.567296 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" podUID="352f59bb-69ee-46d0-862c-d839ac334b35" containerName="registry" containerID="cri-o://fd18ad87dc79f7b95db43c3756df3a5bb3e8f58c7e3a6648ef7346a28e4a5cfe" gracePeriod=30 Dec 09 12:11:54 crc kubenswrapper[4703]: I1209 12:11:54.724668 4703 generic.go:334] "Generic (PLEG): container finished" podID="352f59bb-69ee-46d0-862c-d839ac334b35" containerID="fd18ad87dc79f7b95db43c3756df3a5bb3e8f58c7e3a6648ef7346a28e4a5cfe" exitCode=0 Dec 09 12:11:54 crc kubenswrapper[4703]: I1209 12:11:54.724712 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" event={"ID":"352f59bb-69ee-46d0-862c-d839ac334b35","Type":"ContainerDied","Data":"fd18ad87dc79f7b95db43c3756df3a5bb3e8f58c7e3a6648ef7346a28e4a5cfe"} Dec 09 12:11:54 crc kubenswrapper[4703]: I1209 12:11:54.899959 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.011944 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012018 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012079 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012263 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012303 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012327 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012355 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.012387 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbcc8\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8\") pod \"352f59bb-69ee-46d0-862c-d839ac334b35\" (UID: \"352f59bb-69ee-46d0-862c-d839ac334b35\") " Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.013126 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.013132 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.018726 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.018931 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.019021 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.019431 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8" (OuterVolumeSpecName: "kube-api-access-bbcc8") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "kube-api-access-bbcc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.020763 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.028833 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "352f59bb-69ee-46d0-862c-d839ac334b35" (UID: "352f59bb-69ee-46d0-862c-d839ac334b35"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113474 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbcc8\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-kube-api-access-bbcc8\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113512 4703 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113523 4703 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113531 4703 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/352f59bb-69ee-46d0-862c-d839ac334b35-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113541 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/352f59bb-69ee-46d0-862c-d839ac334b35-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113553 4703 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/352f59bb-69ee-46d0-862c-d839ac334b35-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.113562 4703 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/352f59bb-69ee-46d0-862c-d839ac334b35-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.730977 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" event={"ID":"352f59bb-69ee-46d0-862c-d839ac334b35","Type":"ContainerDied","Data":"2e207cb8bd75964a3baad1154e873808723fd3ca57f41a30ef408b021e3039dc"} Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.731384 4703 scope.go:117] "RemoveContainer" containerID="fd18ad87dc79f7b95db43c3756df3a5bb3e8f58c7e3a6648ef7346a28e4a5cfe" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.732261 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q57qh" Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.752939 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"] Dec 09 12:11:55 crc kubenswrapper[4703]: I1209 12:11:55.759723 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q57qh"] Dec 09 12:11:57 crc kubenswrapper[4703]: I1209 12:11:57.076462 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352f59bb-69ee-46d0-862c-d839ac334b35" path="/var/lib/kubelet/pods/352f59bb-69ee-46d0-862c-d839ac334b35/volumes" Dec 09 12:13:30 crc kubenswrapper[4703]: I1209 12:13:30.083906 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:13:30 crc kubenswrapper[4703]: I1209 12:13:30.084483 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:14:00 crc kubenswrapper[4703]: I1209 12:14:00.083599 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:14:00 crc kubenswrapper[4703]: I1209 12:14:00.084290 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.083919 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.084938 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.085029 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.085997 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.086083 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b" gracePeriod=600 Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.887821 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b" exitCode=0 Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.888020 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b"} Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.888277 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af"} Dec 09 12:14:30 crc kubenswrapper[4703]: I1209 12:14:30.888301 4703 scope.go:117] "RemoveContainer" containerID="6f04a84ed7bc72ac4ce5881e099355fdb1ee5268da9c4102eb52871ec941d585" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.173451 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh"] Dec 09 12:15:00 crc kubenswrapper[4703]: E1209 12:15:00.175223 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352f59bb-69ee-46d0-862c-d839ac334b35" containerName="registry" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.175293 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="352f59bb-69ee-46d0-862c-d839ac334b35" containerName="registry" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.175479 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="352f59bb-69ee-46d0-862c-d839ac334b35" containerName="registry" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.175915 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.178809 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.179040 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.182192 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh"] Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.249018 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj622\" (UniqueName: \"kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.249064 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.249111 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.350378 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj622\" (UniqueName: \"kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.350436 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.350499 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.351482 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume\") pod 
\"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.360834 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.366818 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj622\" (UniqueName: \"kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622\") pod \"collect-profiles-29421375-wzghh\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.498572 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:00 crc kubenswrapper[4703]: I1209 12:15:00.690222 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh"] Dec 09 12:15:01 crc kubenswrapper[4703]: I1209 12:15:01.090965 4703 generic.go:334] "Generic (PLEG): container finished" podID="03dacb45-3e46-407f-8679-66e59131494c" containerID="5d7249e747e0119dfdb8cd04a272497c1316736bd31cd231504ce79f521a2944" exitCode=0 Dec 09 12:15:01 crc kubenswrapper[4703]: I1209 12:15:01.091096 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" event={"ID":"03dacb45-3e46-407f-8679-66e59131494c","Type":"ContainerDied","Data":"5d7249e747e0119dfdb8cd04a272497c1316736bd31cd231504ce79f521a2944"} Dec 09 12:15:01 crc kubenswrapper[4703]: I1209 12:15:01.091536 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" event={"ID":"03dacb45-3e46-407f-8679-66e59131494c","Type":"ContainerStarted","Data":"c870b66b2a395535fe2b4f888e0cc04f8f2b49294231721c41e39a1bceb51df8"} Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.301936 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.375122 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume\") pod \"03dacb45-3e46-407f-8679-66e59131494c\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.375266 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume\") pod \"03dacb45-3e46-407f-8679-66e59131494c\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.375336 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj622\" (UniqueName: \"kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622\") pod \"03dacb45-3e46-407f-8679-66e59131494c\" (UID: \"03dacb45-3e46-407f-8679-66e59131494c\") " Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.376259 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume" (OuterVolumeSpecName: "config-volume") pod "03dacb45-3e46-407f-8679-66e59131494c" (UID: "03dacb45-3e46-407f-8679-66e59131494c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.380261 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "03dacb45-3e46-407f-8679-66e59131494c" (UID: "03dacb45-3e46-407f-8679-66e59131494c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.380578 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622" (OuterVolumeSpecName: "kube-api-access-xj622") pod "03dacb45-3e46-407f-8679-66e59131494c" (UID: "03dacb45-3e46-407f-8679-66e59131494c"). InnerVolumeSpecName "kube-api-access-xj622". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.476332 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03dacb45-3e46-407f-8679-66e59131494c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.476380 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj622\" (UniqueName: \"kubernetes.io/projected/03dacb45-3e46-407f-8679-66e59131494c-kube-api-access-xj622\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:02 crc kubenswrapper[4703]: I1209 12:15:02.476392 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/03dacb45-3e46-407f-8679-66e59131494c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:03 crc kubenswrapper[4703]: I1209 12:15:03.102477 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" event={"ID":"03dacb45-3e46-407f-8679-66e59131494c","Type":"ContainerDied","Data":"c870b66b2a395535fe2b4f888e0cc04f8f2b49294231721c41e39a1bceb51df8"} Dec 09 12:15:03 crc kubenswrapper[4703]: I1209 12:15:03.102525 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c870b66b2a395535fe2b4f888e0cc04f8f2b49294231721c41e39a1bceb51df8" Dec 09 12:15:03 crc kubenswrapper[4703]: I1209 12:15:03.102529 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh" Dec 09 12:16:30 crc kubenswrapper[4703]: I1209 12:16:30.084040 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:16:30 crc kubenswrapper[4703]: I1209 12:16:30.084561 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.034753 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6"] Dec 09 12:16:37 crc kubenswrapper[4703]: E1209 12:16:37.035420 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03dacb45-3e46-407f-8679-66e59131494c" containerName="collect-profiles" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.035434 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="03dacb45-3e46-407f-8679-66e59131494c" containerName="collect-profiles" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.035525 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="03dacb45-3e46-407f-8679-66e59131494c" containerName="collect-profiles" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.036205 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.038262 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.061703 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6"] Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.158113 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vnv\" (UniqueName: \"kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.158278 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.158340 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.259918 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.260005 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vnv\" (UniqueName: \"kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.260057 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.260527 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.260575 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.284641 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vnv\" (UniqueName: \"kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.361032 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.545507 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6"] Dec 09 12:16:37 crc kubenswrapper[4703]: I1209 12:16:37.597622 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerStarted","Data":"231c0b021bc91e992ca78740a52292eb914931d97483d061917375d975c0b9ed"} Dec 09 12:16:38 crc kubenswrapper[4703]: I1209 12:16:38.603659 4703 generic.go:334] "Generic (PLEG): container finished" podID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerID="7a184875bc8a9f596359f25d19253ac4de3bb6134a42f5f8fef34fb48395fe54" exitCode=0 Dec 09 12:16:38 crc kubenswrapper[4703]: I1209 12:16:38.603706 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerDied","Data":"7a184875bc8a9f596359f25d19253ac4de3bb6134a42f5f8fef34fb48395fe54"} Dec 09 12:16:38 crc kubenswrapper[4703]: I1209 12:16:38.605661 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:16:39 crc kubenswrapper[4703]: I1209 12:16:39.611919 4703 generic.go:334] "Generic (PLEG): container finished" podID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerID="db84c5fd9c5e3aefda73c8e86e8169fd9b88927ace993e3897ff711078f47ab3" exitCode=0 Dec 09 12:16:39 crc kubenswrapper[4703]: I1209 12:16:39.612063 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerDied","Data":"db84c5fd9c5e3aefda73c8e86e8169fd9b88927ace993e3897ff711078f47ab3"} Dec 09 12:16:40 crc kubenswrapper[4703]: I1209 12:16:40.620460 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerStarted","Data":"978c5603baea53dc81c95e1590ffdf6136df1c47224476693e5a91d93276febb"} Dec 09 12:16:40 crc kubenswrapper[4703]: I1209 12:16:40.639521 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" podStartSLOduration=2.82029879 podStartE2EDuration="3.63950236s" podCreationTimestamp="2025-12-09 12:16:37 +0000 UTC" firstStartedPulling="2025-12-09 12:16:38.605330599 +0000 UTC m=+697.854094118" lastFinishedPulling="2025-12-09 12:16:39.424534169 +0000 UTC m=+698.673297688" observedRunningTime="2025-12-09 12:16:40.637393763 +0000 UTC m=+699.886157282" watchObservedRunningTime="2025-12-09 12:16:40.63950236 +0000 UTC m=+699.888265879" Dec 09 12:16:41 crc kubenswrapper[4703]: I1209 12:16:41.626953 4703 generic.go:334] "Generic (PLEG): container finished" podID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerID="978c5603baea53dc81c95e1590ffdf6136df1c47224476693e5a91d93276febb" exitCode=0 Dec 09 12:16:41 crc kubenswrapper[4703]: I1209 12:16:41.627026 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerDied","Data":"978c5603baea53dc81c95e1590ffdf6136df1c47224476693e5a91d93276febb"} Dec 09 12:16:42 crc kubenswrapper[4703]: I1209 12:16:42.891757 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.041780 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2vnv\" (UniqueName: \"kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv\") pod \"fe409fef-afb0-4377-9bda-f6f1e9390efc\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.041908 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle\") pod \"fe409fef-afb0-4377-9bda-f6f1e9390efc\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.041951 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util\") pod \"fe409fef-afb0-4377-9bda-f6f1e9390efc\" (UID: \"fe409fef-afb0-4377-9bda-f6f1e9390efc\") " Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.045257 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle" (OuterVolumeSpecName: "bundle") pod "fe409fef-afb0-4377-9bda-f6f1e9390efc" (UID: "fe409fef-afb0-4377-9bda-f6f1e9390efc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.048794 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv" (OuterVolumeSpecName: "kube-api-access-h2vnv") pod "fe409fef-afb0-4377-9bda-f6f1e9390efc" (UID: "fe409fef-afb0-4377-9bda-f6f1e9390efc"). InnerVolumeSpecName "kube-api-access-h2vnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.056265 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util" (OuterVolumeSpecName: "util") pod "fe409fef-afb0-4377-9bda-f6f1e9390efc" (UID: "fe409fef-afb0-4377-9bda-f6f1e9390efc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.143792 4703 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.143834 4703 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe409fef-afb0-4377-9bda-f6f1e9390efc-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.143847 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2vnv\" (UniqueName: \"kubernetes.io/projected/fe409fef-afb0-4377-9bda-f6f1e9390efc-kube-api-access-h2vnv\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.638726 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6" event={"ID":"fe409fef-afb0-4377-9bda-f6f1e9390efc","Type":"ContainerDied","Data":"231c0b021bc91e992ca78740a52292eb914931d97483d061917375d975c0b9ed"} Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.639009 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231c0b021bc91e992ca78740a52292eb914931d97483d061917375d975c0b9ed" Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.638803 4703 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:16:43 crc kubenswrapper[4703]: I1209 12:16:43.638803 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.439545 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb"]
Dec 09 12:16:51 crc kubenswrapper[4703]: E1209 12:16:51.440244 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="extract"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.440260 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="extract"
Dec 09 12:16:51 crc kubenswrapper[4703]: E1209 12:16:51.440273 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="pull"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.440279 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="pull"
Dec 09 12:16:51 crc kubenswrapper[4703]: E1209 12:16:51.440297 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="util"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.440304 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="util"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.440411 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe409fef-afb0-4377-9bda-f6f1e9390efc" containerName="extract"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.440836 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.443821 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.444101 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.444417 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-x984d"
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.462727 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb"]
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.495722 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww"]
Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.497170 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.499424 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-r9svg" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.499644 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.508540 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.509690 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.513515 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.531867 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.555358 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: \"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.555431 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.555472 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.555493 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsbw\" (UniqueName: \"kubernetes.io/projected/4cec0df1-f871-497e-8d5a-03ed7c99c085-kube-api-access-ldsbw\") pod \"obo-prometheus-operator-668cf9dfbb-dg7pb\" (UID: \"4cec0df1-f871-497e-8d5a-03ed7c99c085\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.555531 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: \"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.647655 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-bfp54"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.648325 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.650697 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dwl56" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.650887 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.656430 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.656462 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsbw\" (UniqueName: \"kubernetes.io/projected/4cec0df1-f871-497e-8d5a-03ed7c99c085-kube-api-access-ldsbw\") pod \"obo-prometheus-operator-668cf9dfbb-dg7pb\" (UID: \"4cec0df1-f871-497e-8d5a-03ed7c99c085\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.656499 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: \"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.656552 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: \"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.656588 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.664631 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: 
\"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.665144 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.670146 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-bfp54"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.670775 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c0b01c4e-ed19-4d86-b0f7-a459744771d5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-5jcb6\" (UID: \"c0b01c4e-ed19-4d86-b0f7-a459744771d5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.678771 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7467102-297a-4824-a598-f22317525002-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-677d894988-6gqww\" (UID: \"b7467102-297a-4824-a598-f22317525002\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.703266 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsbw\" (UniqueName: \"kubernetes.io/projected/4cec0df1-f871-497e-8d5a-03ed7c99c085-kube-api-access-ldsbw\") pod \"obo-prometheus-operator-668cf9dfbb-dg7pb\" (UID: \"4cec0df1-f871-497e-8d5a-03ed7c99c085\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.757735 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3216052-a675-4452-b6b4-a63dcce7a51f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.757783 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddtbv\" (UniqueName: \"kubernetes.io/projected/a3216052-a675-4452-b6b4-a63dcce7a51f-kube-api-access-ddtbv\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.763887 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.822967 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.843361 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.858955 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3216052-a675-4452-b6b4-a63dcce7a51f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.859008 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddtbv\" (UniqueName: \"kubernetes.io/projected/a3216052-a675-4452-b6b4-a63dcce7a51f-kube-api-access-ddtbv\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.864519 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3216052-a675-4452-b6b4-a63dcce7a51f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.866611 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tzn85"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.867864 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.873932 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8jnp2" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.893993 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tzn85"] Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.895164 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddtbv\" (UniqueName: \"kubernetes.io/projected/a3216052-a675-4452-b6b4-a63dcce7a51f-kube-api-access-ddtbv\") pod \"observability-operator-d8bb48f5d-bfp54\" (UID: \"a3216052-a675-4452-b6b4-a63dcce7a51f\") " pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.960901 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxh7\" (UniqueName: \"kubernetes.io/projected/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-kube-api-access-6vxh7\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.961003 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-openshift-service-ca\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:51 crc kubenswrapper[4703]: I1209 12:16:51.972103 4703 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.061977 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-openshift-service-ca\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.062059 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vxh7\" (UniqueName: \"kubernetes.io/projected/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-kube-api-access-6vxh7\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.063091 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-openshift-service-ca\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.084660 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb"] Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.084879 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vxh7\" (UniqueName: \"kubernetes.io/projected/2f5ff9fe-6b53-49b8-ba78-30df51c9473e-kube-api-access-6vxh7\") pod \"perses-operator-5446b9c989-tzn85\" (UID: \"2f5ff9fe-6b53-49b8-ba78-30df51c9473e\") " pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.214647 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.231525 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6"] Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.340207 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww"] Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.372550 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-bfp54"] Dec 09 12:16:52 crc kubenswrapper[4703]: W1209 12:16:52.396661 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3216052_a675_4452_b6b4_a63dcce7a51f.slice/crio-d0abf6a556475f59663dd113be4b6eff2b7581667efecaf03397d665b4c55bce WatchSource:0}: Error finding container d0abf6a556475f59663dd113be4b6eff2b7581667efecaf03397d665b4c55bce: Status 404 returned error can't find the container with id d0abf6a556475f59663dd113be4b6eff2b7581667efecaf03397d665b4c55bce Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.655101 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-tzn85"] Dec 09 12:16:52 crc kubenswrapper[4703]: W1209 12:16:52.664723 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5ff9fe_6b53_49b8_ba78_30df51c9473e.slice/crio-a677ce0948afad726ecb70722b695ddf30f738ffd9aacb659d895a6ec2c0fef4 WatchSource:0}: Error finding container a677ce0948afad726ecb70722b695ddf30f738ffd9aacb659d895a6ec2c0fef4: Status 404 returned error can't find the container with id a677ce0948afad726ecb70722b695ddf30f738ffd9aacb659d895a6ec2c0fef4 Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.689415 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" event={"ID":"4cec0df1-f871-497e-8d5a-03ed7c99c085","Type":"ContainerStarted","Data":"998f86fcf1814519278f4f39274249f93d282f6011aa4f07b781f8c7dc7ca640"} Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.690588 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" event={"ID":"c0b01c4e-ed19-4d86-b0f7-a459744771d5","Type":"ContainerStarted","Data":"360f497da7528f05765f0c76a9d26574ddb53a93beed6b89eaf2d2f5b0e7ae1e"} Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.692057 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-tzn85" event={"ID":"2f5ff9fe-6b53-49b8-ba78-30df51c9473e","Type":"ContainerStarted","Data":"a677ce0948afad726ecb70722b695ddf30f738ffd9aacb659d895a6ec2c0fef4"} Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.693296 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" event={"ID":"a3216052-a675-4452-b6b4-a63dcce7a51f","Type":"ContainerStarted","Data":"d0abf6a556475f59663dd113be4b6eff2b7581667efecaf03397d665b4c55bce"} Dec 09 12:16:52 crc kubenswrapper[4703]: I1209 12:16:52.694517 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" 
event={"ID":"b7467102-297a-4824-a598-f22317525002","Type":"ContainerStarted","Data":"8fbee5188cd0909e93209bd330e31eaf77a1fd183856bb7bcbafdfd96815e4ea"} Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.664366 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7hrm8"] Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.664807 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-controller" containerID="cri-o://02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.664930 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-acl-logging" containerID="cri-o://06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.664996 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="sbdb" containerID="cri-o://88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.665014 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-node" containerID="cri-o://382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.665214 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="northd" containerID="cri-o://9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.665253 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.664894 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="nbdb" containerID="cri-o://5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" gracePeriod=30 Dec 09 12:16:53 crc kubenswrapper[4703]: I1209 12:16:53.785111 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" containerID="cri-o://6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" gracePeriod=30 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.166999 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/3.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.169375 4703 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovn-acl-logging/0.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.171700 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovn-controller/0.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.172231 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242317 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5dvmg"] Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242555 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242578 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242593 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242601 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242609 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242616 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242622 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242629 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242641 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242648 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242663 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242670 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242679 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242685 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 
12:16:54.242694 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242699 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242707 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242713 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242722 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242727 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.242736 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kubecfg-setup" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242741 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kubecfg-setup" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242830 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242840 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242848 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242853 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242861 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242869 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242878 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242886 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242893 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242900 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" 
containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.242908 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.243007 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.243014 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.243021 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.243027 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.243112 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.244927 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296020 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296070 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296107 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296138 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296174 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296215 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: 
\"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296251 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296275 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296212 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296294 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296297 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296361 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket" (OuterVolumeSpecName: "log-socket") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296361 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296409 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296465 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296506 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296527 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296580 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296588 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296625 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296652 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296682 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296708 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm2fh\" (UniqueName: \"kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296737 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296769 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296779 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296804 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash\") pod \"e9173444-5181-4ee4-b651-11d92ccab0d0\" (UID: \"e9173444-5181-4ee4-b651-11d92ccab0d0\") " Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296899 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296944 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296973 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296983 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297001 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297012 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log" (OuterVolumeSpecName: "node-log") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.296972 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-node-log\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297037 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297038 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297057 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-bin\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297116 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22ww\" (UniqueName: \"kubernetes.io/projected/1217710a-4198-4bae-9b5d-63f09834713c-kube-api-access-c22ww\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297058 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash" (OuterVolumeSpecName: "host-slash") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297157 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-script-lib\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297182 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297250 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-ovn\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297338 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-netd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297383 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-systemd-units\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297457 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1217710a-4198-4bae-9b5d-63f09834713c-ovn-node-metrics-cert\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297487 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-env-overrides\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297512 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-netns\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297548 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-slash\") pod 
\"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297567 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-kubelet\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297589 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297607 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-log-socket\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297629 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-var-lib-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297644 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-config\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297663 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-systemd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297681 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297700 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-etc-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297780 4703 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297790 4703 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297799 4703 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297806 4703 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297814 4703 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297822 4703 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297830 4703 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e9173444-5181-4ee4-b651-11d92ccab0d0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297839 4703 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297848 4703 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297857 4703 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297864 4703 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297872 4703 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297879 4703 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297887 4703 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297896 4703 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.297903 4703 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.298448 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.304750 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh" (OuterVolumeSpecName: "kube-api-access-xm2fh") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "kube-api-access-xm2fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.305098 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.314988 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e9173444-5181-4ee4-b651-11d92ccab0d0" (UID: "e9173444-5181-4ee4-b651-11d92ccab0d0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399802 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-node-log\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399852 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-bin\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399882 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c22ww\" (UniqueName: \"kubernetes.io/projected/1217710a-4198-4bae-9b5d-63f09834713c-kube-api-access-c22ww\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399912 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-script-lib\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399935 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399959 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-ovn\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.399984 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-netd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400011 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-systemd-units\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400040 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1217710a-4198-4bae-9b5d-63f09834713c-ovn-node-metrics-cert\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: 
I1209 12:16:54.400064 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-env-overrides\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400086 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-netns\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400118 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-slash\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400142 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-kubelet\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400160 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400246 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400149 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-netd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400249 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-netns\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400273 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-slash\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400338 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-kubelet\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400362 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-node-log\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400383 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-cni-bin\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400164 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400750 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-ovn\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400856 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-log-socket\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400888 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-var-lib-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400908 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-config\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400951 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-systemd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.400972 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401050 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-etc-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401120 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-env-overrides\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401140 4703 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401160 4703 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e9173444-5181-4ee4-b651-11d92ccab0d0-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401175 4703 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e9173444-5181-4ee4-b651-11d92ccab0d0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401209 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm2fh\" (UniqueName: \"kubernetes.io/projected/e9173444-5181-4ee4-b651-11d92ccab0d0-kube-api-access-xm2fh\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401227 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-etc-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401408 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-systemd-units\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401412 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-script-lib\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401434 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-var-lib-openvswitch\") pod \"ovnkube-node-5dvmg\" (UID: 
\"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401437 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-run-systemd\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401446 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-host-run-ovn-kubernetes\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401669 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1217710a-4198-4bae-9b5d-63f09834713c-log-socket\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.401924 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1217710a-4198-4bae-9b5d-63f09834713c-ovnkube-config\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.406771 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1217710a-4198-4bae-9b5d-63f09834713c-ovn-node-metrics-cert\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.427471 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c22ww\" (UniqueName: \"kubernetes.io/projected/1217710a-4198-4bae-9b5d-63f09834713c-kube-api-access-c22ww\") pod \"ovnkube-node-5dvmg\" (UID: \"1217710a-4198-4bae-9b5d-63f09834713c\") " pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.572934 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.746475 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovnkube-controller/3.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.753876 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovn-acl-logging/0.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.754798 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7hrm8_e9173444-5181-4ee4-b651-11d92ccab0d0/ovn-controller/0.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755228 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755260 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755269 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755279 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755289 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755297 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755308 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" exitCode=143 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755316 4703 generic.go:334] "Generic (PLEG): container finished" podID="e9173444-5181-4ee4-b651-11d92ccab0d0" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" exitCode=143 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755362 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755396 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755408 4703 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755422 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755432 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755445 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755456 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755468 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755475 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755482 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755491 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755497 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755504 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755512 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755520 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755529 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755541 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755548 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755555 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755561 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755568 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755575 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755582 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755588 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755594 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755601 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755610 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755622 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755629 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 
12:16:54.755636 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755643 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755649 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755657 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755664 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755670 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755678 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755685 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755694 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" event={"ID":"e9173444-5181-4ee4-b651-11d92ccab0d0","Type":"ContainerDied","Data":"ce5f7a9dcad5dfa3d5115162b3a187e3c3cc4bc52dbb530fcf8c0a2d0b7efa59"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755705 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755713 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755719 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755726 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755732 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 
12:16:54.755739 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755745 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755751 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755757 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755765 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755783 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.755940 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7hrm8" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.779622 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/2.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.780466 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/1.log" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.780556 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerDied","Data":"0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.780596 4703 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.782158 4703 scope.go:117] "RemoveContainer" containerID="0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d" Dec 09 12:16:54 crc kubenswrapper[4703]: E1209 12:16:54.792359 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9zbgq_openshift-multus(b57e1095-b0e1-4b30-a491-00852a5219e7)\"" pod="openshift-multus/multus-9zbgq" podUID="b57e1095-b0e1-4b30-a491-00852a5219e7" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.792447 4703 generic.go:334] "Generic (PLEG): container finished" podID="b57e1095-b0e1-4b30-a491-00852a5219e7" containerID="0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d" exitCode=2 Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.795493 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"29e7a61b8f4c3106e77fa1c48233021f9edc49627c0f9989525461c2a4a270c1"} Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.810478 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7hrm8"] Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.827697 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7hrm8"] Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.912745 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:54 crc kubenswrapper[4703]: I1209 12:16:54.997107 4703 scope.go:117] "RemoveContainer" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.057337 4703 scope.go:117] "RemoveContainer" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.084486 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9173444-5181-4ee4-b651-11d92ccab0d0" path="/var/lib/kubelet/pods/e9173444-5181-4ee4-b651-11d92ccab0d0/volumes" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.110965 4703 scope.go:117] "RemoveContainer" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.156820 4703 scope.go:117] "RemoveContainer" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.198528 4703 scope.go:117] "RemoveContainer" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.238959 4703 scope.go:117] "RemoveContainer" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.339401 4703 scope.go:117] "RemoveContainer" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.391067 4703 scope.go:117] "RemoveContainer" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.428772 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.429330 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.429376 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} err="failed to get container status \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 
6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.429405 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.429799 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": container with ID starting with 500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f not found: ID does not exist" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.429824 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} err="failed to get container status \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": rpc error: code = NotFound desc = could not find container \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": container with ID starting with 500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.429843 4703 scope.go:117] "RemoveContainer" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.430317 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": container with ID starting with 88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c not found: ID does not exist" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.430359 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} err="failed to get container status \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": rpc error: code = NotFound desc = could not find container \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": container with ID starting with 88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.430395 4703 scope.go:117] "RemoveContainer" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.432289 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": container with ID starting with 5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428 not found: ID does not exist" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.432328 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} err="failed to get container status \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": rpc 
error: code = NotFound desc = could not find container \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": container with ID starting with 5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.432356 4703 scope.go:117] "RemoveContainer" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.432919 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": container with ID starting with 9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8 not found: ID does not exist" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.432942 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} err="failed to get container status \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": rpc error: code = NotFound desc = could not find container \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": container with ID starting with 9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.432956 4703 scope.go:117] "RemoveContainer" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.433670 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": container with ID starting with 2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e not found: ID does not exist" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.433701 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} err="failed to get container status \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": rpc error: code = NotFound desc = could not find container \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": container with ID starting with 2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.433739 4703 scope.go:117] "RemoveContainer" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.434130 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": container with ID starting with 382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c not found: ID does not exist" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.434156 4703 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} err="failed to get container status \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": rpc error: code = NotFound desc = could not find container \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": container with ID starting with 382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.434175 4703 scope.go:117] "RemoveContainer" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.434496 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": container with ID starting with 06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c not found: ID does not exist" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.434519 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} err="failed to get container status \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": rpc error: code = NotFound desc = could not find container \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": container with ID starting with 06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.434536 4703 scope.go:117] "RemoveContainer" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.435331 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": container with ID starting with 02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2 not found: ID does not exist" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.435355 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} err="failed to get container status \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": rpc error: code = NotFound desc = could not find container \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": container with ID starting with 02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.435376 4703 scope.go:117] "RemoveContainer" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: E1209 12:16:55.435872 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": container with ID starting with f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a not found: ID does not exist" 
containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.435894 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} err="failed to get container status \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": rpc error: code = NotFound desc = could not find container \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": container with ID starting with f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.435911 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436107 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} err="failed to get container status \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436130 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436338 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} err="failed to get container status \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": rpc error: code = NotFound desc = could not find container \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": container with ID starting with 500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436353 4703 scope.go:117] "RemoveContainer" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436711 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} err="failed to get container status \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": rpc error: code = NotFound desc = could not find container \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": container with ID starting with 88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436729 4703 scope.go:117] "RemoveContainer" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436935 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} err="failed to get container status \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": rpc error: code = NotFound desc = could not find 
container \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": container with ID starting with 5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.436953 4703 scope.go:117] "RemoveContainer" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437336 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} err="failed to get container status \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": rpc error: code = NotFound desc = could not find container \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": container with ID starting with 9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437354 4703 scope.go:117] "RemoveContainer" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437554 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} err="failed to get container status \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": rpc error: code = NotFound desc = could not find container \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": container with ID starting with 2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437573 4703 scope.go:117] "RemoveContainer" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437762 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} err="failed to get container status \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": rpc error: code = NotFound desc = could not find container \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": container with ID starting with 382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.437778 4703 scope.go:117] "RemoveContainer" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.438216 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} err="failed to get container status \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": rpc error: code = NotFound desc = could not find container \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": container with ID starting with 06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.438244 4703 scope.go:117] "RemoveContainer" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.438479 4703 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} err="failed to get container status \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": rpc error: code = NotFound desc = could not find container \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": container with ID starting with 02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.438507 4703 scope.go:117] "RemoveContainer" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.439826 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} err="failed to get container status \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": rpc error: code = NotFound desc = could not find container \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": container with ID starting with f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.439849 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.440293 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} err="failed to get container status \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.440334 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.440759 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} err="failed to get container status \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": rpc error: code = NotFound desc = could not find container \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": container with ID starting with 500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.440779 4703 scope.go:117] "RemoveContainer" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441144 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} err="failed to get container status \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": rpc error: code = NotFound desc = could not find container \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": container with ID starting with 
88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441164 4703 scope.go:117] "RemoveContainer" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441505 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} err="failed to get container status \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": rpc error: code = NotFound desc = could not find container \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": container with ID starting with 5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441545 4703 scope.go:117] "RemoveContainer" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441803 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} err="failed to get container status \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": rpc error: code = NotFound desc = could not find container \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": container with ID starting with 9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.441822 4703 scope.go:117] "RemoveContainer" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.442055 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} err="failed to get container status \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": rpc error: code = NotFound desc = could not find container \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": container with ID starting with 2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.442109 4703 scope.go:117] "RemoveContainer" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.442421 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} err="failed to get container status \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": rpc error: code = NotFound desc = could not find container \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": container with ID starting with 382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.442501 4703 scope.go:117] "RemoveContainer" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.443325 4703 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} err="failed to get container status \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": rpc error: code = NotFound desc = could not find container \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": container with ID starting with 06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.443381 4703 scope.go:117] "RemoveContainer" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.443701 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} err="failed to get container status \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": rpc error: code = NotFound desc = could not find container \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": container with ID starting with 02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.443753 4703 scope.go:117] "RemoveContainer" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.444147 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} err="failed to get container status \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": rpc error: code = NotFound desc = could not find container \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": container with ID starting with f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.444181 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.446452 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} err="failed to get container status \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.462291 4703 scope.go:117] "RemoveContainer" containerID="500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.470898 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f"} err="failed to get container status \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": rpc error: code = NotFound desc = could not find container \"500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f\": container with ID starting with 500c141c181fc269b89b51bc952bf461e962782dfa52cec80966dce7f1ae181f not found: ID does not exist" Dec 
09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.470951 4703 scope.go:117] "RemoveContainer" containerID="88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.471353 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c"} err="failed to get container status \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": rpc error: code = NotFound desc = could not find container \"88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c\": container with ID starting with 88a462e310fdbdf79496906a304fcf953c4e48c9cab9137f52cf5aafa9c4054c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.471373 4703 scope.go:117] "RemoveContainer" containerID="5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.471776 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428"} err="failed to get container status \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": rpc error: code = NotFound desc = could not find container \"5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428\": container with ID starting with 5989b9b8e8fb619a07c3242f7c4310e042a281648593a05edd6cb16449480428 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.471815 4703 scope.go:117] "RemoveContainer" containerID="9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.472147 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8"} err="failed to get container status \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": rpc error: code = NotFound desc = could not find container \"9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8\": container with ID starting with 9652e0d783249140796d5c8586e300acb7918f3ced79a1daec01da21eac363c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.472167 4703 scope.go:117] "RemoveContainer" containerID="2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.491430 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e"} err="failed to get container status \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": rpc error: code = NotFound desc = could not find container \"2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e\": container with ID starting with 2ce8423b98a9127fea76c23c865d9840069fe97e1d10dc9e97816c916192bf8e not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.491490 4703 scope.go:117] "RemoveContainer" containerID="382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.492026 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c"} err="failed to get container status 
\"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": rpc error: code = NotFound desc = could not find container \"382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c\": container with ID starting with 382198a5fbf52c39a98bc79aa4e917d50dfbcb3ff73d831389901544d400817c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.492067 4703 scope.go:117] "RemoveContainer" containerID="06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.492480 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c"} err="failed to get container status \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": rpc error: code = NotFound desc = could not find container \"06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c\": container with ID starting with 06ebee5cde81f7c5fb299c4a346f466d3bd4667be4dd2e89712dcae660a1e08c not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.492510 4703 scope.go:117] "RemoveContainer" containerID="02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.493163 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2"} err="failed to get container status \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": rpc error: code = NotFound desc = could not find container \"02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2\": container with ID starting with 02b8fb5400b3ed69b7e65ae9f09aac617bd51972cfa0d5aec436c9f59eb65ac2 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.493223 4703 scope.go:117] "RemoveContainer" containerID="f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.493662 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a"} err="failed to get container status \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": rpc error: code = NotFound desc = could not find container \"f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a\": container with ID starting with f192db4b078c6bf3ffea6db58842ce7e8b4ad76c12fa8422496288bc0cfe0c8a not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.493685 4703 scope.go:117] "RemoveContainer" containerID="6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.494006 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344"} err="failed to get container status \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": rpc error: code = NotFound desc = could not find container \"6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344\": container with ID starting with 6e6972743cb84820f7134dd61623aa0e6430203e481128bb119defbfafc8d344 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.821468 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="1217710a-4198-4bae-9b5d-63f09834713c" containerID="615891cc9ced5c9a463f2cdbd90cad34ae91879abe7d545f4c63db3c546d6fb7" exitCode=0 Dec 09 12:16:55 crc kubenswrapper[4703]: I1209 12:16:55.821793 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerDied","Data":"615891cc9ced5c9a463f2cdbd90cad34ae91879abe7d545f4c63db3c546d6fb7"} Dec 09 12:16:56 crc kubenswrapper[4703]: I1209 12:16:56.833054 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"b56c491d3ad9d9e63aa9968002ab4db90f25f7e227e31ad2c1c85e23b5a6a9f9"} Dec 09 12:16:56 crc kubenswrapper[4703]: I1209 12:16:56.833391 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"80df688a3920a9d33eee48cec577a5c93e20f317c678db1247db5859bdd23b81"} Dec 09 12:16:56 crc kubenswrapper[4703]: I1209 12:16:56.833409 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"84203d11adba42ed83273a5a4324ea85c53a362fdec467e9e6f85fccf73b8a10"} Dec 09 12:16:57 crc kubenswrapper[4703]: I1209 12:16:57.852627 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"88d009f2e8216a482d0f84666a4ad4ffc9af8bfabb2503ff4704f423c45a0dba"} Dec 09 12:16:57 crc kubenswrapper[4703]: I1209 12:16:57.852687 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"bec0f91435d124b089b11ebbacad504b078f0da9bdf8d103c888f26b45323262"} Dec 09 12:16:57 crc kubenswrapper[4703]: I1209 12:16:57.852703 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"0df24c3bb4bdedf4daff22aa6d56d4185a6f383db50bf0edd45cc7fee9819bca"} Dec 09 12:17:00 crc kubenswrapper[4703]: I1209 12:17:00.083662 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:17:00 crc kubenswrapper[4703]: I1209 12:17:00.083906 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:17:01 crc kubenswrapper[4703]: I1209 12:17:01.233209 4703 scope.go:117] "RemoveContainer" containerID="71dde398c11e22726cc75a12c047119d7f10bc0a03bb5bda05b6cab5a65c9bda" Dec 09 12:17:09 crc kubenswrapper[4703]: I1209 12:17:09.074832 4703 scope.go:117] "RemoveContainer" containerID="0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d" Dec 09 12:17:09 crc kubenswrapper[4703]: E1209 12:17:09.076048 4703 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9zbgq_openshift-multus(b57e1095-b0e1-4b30-a491-00852a5219e7)\"" pod="openshift-multus/multus-9zbgq" podUID="b57e1095-b0e1-4b30-a491-00852a5219e7" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.327639 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.328271 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91
596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ddtbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-bfp54_openshift-operators(a3216052-a675-4452-b6b4-a63dcce7a51f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.329892 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" podUID="a3216052-a675-4452-b6b4-a63dcce7a51f" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.354799 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.355129 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt 
--web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-677d894988-6gqww_openshift-operators(b7467102-297a-4824-a598-f22317525002): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.356401 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" podUID="b7467102-297a-4824-a598-f22317525002" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.887091 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.887309 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-tzn85_openshift-operators(2f5ff9fe-6b53-49b8-ba78-30df51c9473e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:17:12 crc kubenswrapper[4703]: E1209 12:17:12.888498 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-tzn85" podUID="2f5ff9fe-6b53-49b8-ba78-30df51c9473e" Dec 09 12:17:13 crc kubenswrapper[4703]: I1209 12:17:13.010462 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/2.log" Dec 09 12:17:13 crc kubenswrapper[4703]: E1209 12:17:13.016496 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-tzn85" podUID="2f5ff9fe-6b53-49b8-ba78-30df51c9473e" Dec 09 12:17:13 crc kubenswrapper[4703]: E1209 12:17:13.018759 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" podUID="a3216052-a675-4452-b6b4-a63dcce7a51f" Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.016131 4703 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" event={"ID":"b7467102-297a-4824-a598-f22317525002","Type":"ContainerStarted","Data":"12542f3bb707eb0e3cec6569e68a2de10a12b99361b3873115fc531de8523899"} Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.018024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" event={"ID":"4cec0df1-f871-497e-8d5a-03ed7c99c085","Type":"ContainerStarted","Data":"1488f6ca5bedf99302c7ea7cf40e7a92df5984b49d5d82751242b40299448357"} Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.019611 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" event={"ID":"c0b01c4e-ed19-4d86-b0f7-a459744771d5","Type":"ContainerStarted","Data":"f580a3f8275c68cf4e99bca2be92b2eade3e954f52d321c3f0306431a36a7166"} Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.023009 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"4ca1eb791c9798a5540bd4018c3d2b0ee361da0700daea0395cacaf03146a1da"} Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.041868 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-6gqww" podStartSLOduration=-9223372013.812933 podStartE2EDuration="23.041843244s" podCreationTimestamp="2025-12-09 12:16:51 +0000 UTC" firstStartedPulling="2025-12-09 12:16:52.394564227 +0000 UTC m=+711.643327746" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:17:14.039035456 +0000 UTC m=+733.287798975" watchObservedRunningTime="2025-12-09 12:17:14.041843244 +0000 UTC m=+733.290606753" Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.081087 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-677d894988-5jcb6" podStartSLOduration=2.43530638 podStartE2EDuration="23.08105608s" podCreationTimestamp="2025-12-09 12:16:51 +0000 UTC" firstStartedPulling="2025-12-09 12:16:52.251761653 +0000 UTC m=+711.500525172" lastFinishedPulling="2025-12-09 12:17:12.897511353 +0000 UTC m=+732.146274872" observedRunningTime="2025-12-09 12:17:14.078346175 +0000 UTC m=+733.327109714" watchObservedRunningTime="2025-12-09 12:17:14.08105608 +0000 UTC m=+733.329819599" Dec 09 12:17:14 crc kubenswrapper[4703]: I1209 12:17:14.106810 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-dg7pb" podStartSLOduration=2.314689115 podStartE2EDuration="23.106789794s" podCreationTimestamp="2025-12-09 12:16:51 +0000 UTC" firstStartedPulling="2025-12-09 12:16:52.114514868 +0000 UTC m=+711.363278387" lastFinishedPulling="2025-12-09 12:17:12.906615547 +0000 UTC m=+732.155379066" observedRunningTime="2025-12-09 12:17:14.102460469 +0000 UTC m=+733.351223988" watchObservedRunningTime="2025-12-09 12:17:14.106789794 +0000 UTC m=+733.355553313" Dec 09 12:17:16 crc kubenswrapper[4703]: I1209 12:17:16.038927 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" event={"ID":"1217710a-4198-4bae-9b5d-63f09834713c","Type":"ContainerStarted","Data":"7f5a74bb5eef1a44dfb78e4e4dc0ef14f1eeba4786e153681790dbfe3796b862"} Dec 09 
12:17:16 crc kubenswrapper[4703]: I1209 12:17:16.039259 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:16 crc kubenswrapper[4703]: I1209 12:17:16.039275 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:16 crc kubenswrapper[4703]: I1209 12:17:16.068312 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" podStartSLOduration=22.068289987 podStartE2EDuration="22.068289987s" podCreationTimestamp="2025-12-09 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:17:16.066168681 +0000 UTC m=+735.314932220" watchObservedRunningTime="2025-12-09 12:17:16.068289987 +0000 UTC m=+735.317053506" Dec 09 12:17:16 crc kubenswrapper[4703]: I1209 12:17:16.072239 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:17 crc kubenswrapper[4703]: I1209 12:17:17.045034 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:17 crc kubenswrapper[4703]: I1209 12:17:17.075604 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:20 crc kubenswrapper[4703]: I1209 12:17:20.069640 4703 scope.go:117] "RemoveContainer" containerID="0fb8e3daa497dbbdcbe504e2bf923948ae25a5522138c22da61febd77f079c8d" Dec 09 12:17:21 crc kubenswrapper[4703]: I1209 12:17:21.067391 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9zbgq_b57e1095-b0e1-4b30-a491-00852a5219e7/kube-multus/2.log" Dec 09 12:17:21 crc kubenswrapper[4703]: I1209 12:17:21.068447 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9zbgq" event={"ID":"b57e1095-b0e1-4b30-a491-00852a5219e7","Type":"ContainerStarted","Data":"211a54864fec071315903980425e34ae6b5aa7fec9015290f3894ed5ea424b23"} Dec 09 12:17:24 crc kubenswrapper[4703]: I1209 12:17:24.591921 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5dvmg" Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.092216 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-tzn85" event={"ID":"2f5ff9fe-6b53-49b8-ba78-30df51c9473e","Type":"ContainerStarted","Data":"51279873b3ea4ed25b71b8f8b458cc4938dc2150ef8ac05bcc70f3cdbc57c27c"} Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.092713 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-tzn85" Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.093694 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" event={"ID":"a3216052-a675-4452-b6b4-a63dcce7a51f","Type":"ContainerStarted","Data":"a4f845aafec6489802af482268981be3bf36ccaa7db9ea3dcb590deb462bb0df"} Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.094432 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54"
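One oddity a few entries back deserves a note: podStartSLOduration=-9223372013.812933 for the admission webhook pod, logged alongside lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC". That zero-value timestamp makes time.Time.Sub saturate at the minimum time.Duration (math.MinInt64 nanoseconds), and combining the saturated value with the 23.041843244s end-to-end duration wraps int64 to exactly the printed number. A sketch that reproduces the arithmetic (that the latency tracker computes it precisely this way is an assumption; only the overflow mechanics are certain):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 UTC
        firstStartedPulling := time.Date(2025, 12, 9, 12, 16, 52, 394564227, time.UTC)
        e2e := 23041843244 * time.Nanosecond // podStartE2EDuration="23.041843244s"

        // The true gap (~2025 years) does not fit in int64 nanoseconds, so
        // Sub saturates at the minimum Duration.
        pull := lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println(pull) // -2562047h47m16.854775808s

        // Subtracting the saturated minimum overflows and wraps around int64,
        // landing on the value in the log.
        fmt.Printf("%.6f\n", (e2e - pull).Seconds()) // -9223372013.812933
    }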
Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.095202 4703 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-bfp54 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" start-of-body= Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.095247 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" podUID="a3216052-a675-4452-b6b4-a63dcce7a51f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.72:8081/healthz\": dial tcp 10.217.0.72:8081: connect: connection refused" Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.112513 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-tzn85" podStartSLOduration=2.023637179 podStartE2EDuration="34.112493517s" podCreationTimestamp="2025-12-09 12:16:51 +0000 UTC" firstStartedPulling="2025-12-09 12:16:52.667798172 +0000 UTC m=+711.916561691" lastFinishedPulling="2025-12-09 12:17:24.75665451 +0000 UTC m=+744.005418029" observedRunningTime="2025-12-09 12:17:25.112467096 +0000 UTC m=+744.361230625" watchObservedRunningTime="2025-12-09 12:17:25.112493517 +0000 UTC m=+744.361257056" Dec 09 12:17:25 crc kubenswrapper[4703]: I1209 12:17:25.145254 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" podStartSLOduration=1.793614309 podStartE2EDuration="34.14523853s" podCreationTimestamp="2025-12-09 12:16:51 +0000 UTC" firstStartedPulling="2025-12-09 12:16:52.403345266 +0000 UTC m=+711.652108795" lastFinishedPulling="2025-12-09 12:17:24.754969497 +0000 UTC m=+744.003733016" observedRunningTime="2025-12-09 12:17:25.143879368 +0000 UTC m=+744.392642887" watchObservedRunningTime="2025-12-09 12:17:25.14523853 +0000 UTC m=+744.394002049" Dec 09 12:17:26 crc kubenswrapper[4703]: I1209 12:17:26.131093 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-bfp54" Dec 09 12:17:30 crc kubenswrapper[4703]: I1209 12:17:30.083435 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:17:30 crc kubenswrapper[4703]: I1209 12:17:30.083961 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:17:30 crc kubenswrapper[4703]: I1209 12:17:30.084049 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:17:30 crc kubenswrapper[4703]: I1209 12:17:30.084977 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
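After the failed liveness probes, the kubelet restarts the container: the next entry kills it with gracePeriod=600, meaning SIGTERM first and SIGKILL only if the process outlives the grace period. A simplified sketch of that escalation for an ordinary process (PID handling, polling interval, and grace value are illustrative; a real runtime waits on the process rather than polling):

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // stopWithGrace sends SIGTERM, waits up to the grace period for the
    // process to exit, then escalates to SIGKILL.
    func stopWithGrace(pid int, grace time.Duration) error {
        proc, err := os.FindProcess(pid) // always succeeds on Unix
        if err != nil {
            return err
        }
        if err := proc.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        deadline := time.Now().Add(grace)
        for time.Now().Before(deadline) {
            // Signal 0 checks for existence without delivering a signal.
            if err := proc.Signal(syscall.Signal(0)); err != nil {
                return nil // exited within the grace period
            }
            time.Sleep(100 * time.Millisecond)
        }
        fmt.Println("grace period expired, escalating to SIGKILL")
        return proc.Signal(syscall.SIGKILL)
    }

    func main() {
        // Call shape only; signaling a real PID is left to the reader,
        // e.g. stopWithGrace(containerPID, 600*time.Second).
        _ = stopWithGrace
    }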
Dec 09 12:17:30 crc kubenswrapper[4703]: I1209 12:17:30.085066 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af" gracePeriod=600
Dec 09 12:17:32 crc kubenswrapper[4703]: I1209 12:17:32.135666 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af" exitCode=0
Dec 09 12:17:32 crc kubenswrapper[4703]: I1209 12:17:32.135749 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af"}
Dec 09 12:17:32 crc kubenswrapper[4703]: I1209 12:17:32.136173 4703 scope.go:117] "RemoveContainer" containerID="3f15a5f5fdcf521f3b29067d1b5408a59b00b69a832381cfeb9530a825bead3b"
Dec 09 12:17:32 crc kubenswrapper[4703]: I1209 12:17:32.220830 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-tzn85"
Dec 09 12:17:33 crc kubenswrapper[4703]: I1209 12:17:33.146307 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af"}
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.721487 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bnzh5"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.722781 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bnzh5"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.725080 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.725137 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.725807 4703 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8hqj6"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.729260 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.730248 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.732484 4703 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-z2p2s"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.749001 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.757072 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bnzh5"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.762556 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.763545 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.775506 4703 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mjqx2"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.790258 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"]
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.823076 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflvm\" (UniqueName: \"kubernetes.io/projected/ca3614a5-2aca-4c15-b0c1-418925c20ce9-kube-api-access-hflvm\") pod \"cert-manager-5b446d88c5-bnzh5\" (UID: \"ca3614a5-2aca-4c15-b0c1-418925c20ce9\") " pod="cert-manager/cert-manager-5b446d88c5-bnzh5"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.823179 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phw6\" (UniqueName: \"kubernetes.io/projected/0dec1409-3c5a-4672-bc18-06c9a59876a0-kube-api-access-6phw6\") pod \"cert-manager-webhook-5655c58dd6-2mlm8\" (UID: \"0dec1409-3c5a-4672-bc18-06c9a59876a0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.823374 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x2hh\" (UniqueName: \"kubernetes.io/projected/73ce4d20-c9fc-4eda-b404-056f4dc06c03-kube-api-access-2x2hh\") pod \"cert-manager-cainjector-7f985d654d-sjnhl\" (UID: \"73ce4d20-c9fc-4eda-b404-056f4dc06c03\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.924638 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x2hh\" (UniqueName: \"kubernetes.io/projected/73ce4d20-c9fc-4eda-b404-056f4dc06c03-kube-api-access-2x2hh\") pod \"cert-manager-cainjector-7f985d654d-sjnhl\" (UID: \"73ce4d20-c9fc-4eda-b404-056f4dc06c03\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.924715 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflvm\" (UniqueName: \"kubernetes.io/projected/ca3614a5-2aca-4c15-b0c1-418925c20ce9-kube-api-access-hflvm\") pod \"cert-manager-5b446d88c5-bnzh5\" (UID: \"ca3614a5-2aca-4c15-b0c1-418925c20ce9\") " pod="cert-manager/cert-manager-5b446d88c5-bnzh5"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.924788 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phw6\" (UniqueName: \"kubernetes.io/projected/0dec1409-3c5a-4672-bc18-06c9a59876a0-kube-api-access-6phw6\") pod \"cert-manager-webhook-5655c58dd6-2mlm8\" (UID: \"0dec1409-3c5a-4672-bc18-06c9a59876a0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.950113 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x2hh\" (UniqueName: \"kubernetes.io/projected/73ce4d20-c9fc-4eda-b404-056f4dc06c03-kube-api-access-2x2hh\") pod \"cert-manager-cainjector-7f985d654d-sjnhl\" (UID: \"73ce4d20-c9fc-4eda-b404-056f4dc06c03\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.950154 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflvm\" (UniqueName: \"kubernetes.io/projected/ca3614a5-2aca-4c15-b0c1-418925c20ce9-kube-api-access-hflvm\") pod \"cert-manager-5b446d88c5-bnzh5\" (UID: \"ca3614a5-2aca-4c15-b0c1-418925c20ce9\") " pod="cert-manager/cert-manager-5b446d88c5-bnzh5"
Dec 09 12:17:35 crc kubenswrapper[4703]: I1209 12:17:35.954962 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phw6\" (UniqueName: \"kubernetes.io/projected/0dec1409-3c5a-4672-bc18-06c9a59876a0-kube-api-access-6phw6\") pod \"cert-manager-webhook-5655c58dd6-2mlm8\" (UID: \"0dec1409-3c5a-4672-bc18-06c9a59876a0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.045318 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-bnzh5"
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.054432 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.100992 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.570995 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-sjnhl"]
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.625744 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-bnzh5"]
Dec 09 12:17:36 crc kubenswrapper[4703]: W1209 12:17:36.628661 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3614a5_2aca_4c15_b0c1_418925c20ce9.slice/crio-a0aad8f5159616f3969667cdf1d3811a36b5015b7b71e9a90270717577c0174f WatchSource:0}: Error finding container a0aad8f5159616f3969667cdf1d3811a36b5015b7b71e9a90270717577c0174f: Status 404 returned error can't find the container with id a0aad8f5159616f3969667cdf1d3811a36b5015b7b71e9a90270717577c0174f
Dec 09 12:17:36 crc kubenswrapper[4703]: I1209 12:17:36.682516 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"]
Dec 09 12:17:37 crc kubenswrapper[4703]: I1209 12:17:37.169953 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8" event={"ID":"0dec1409-3c5a-4672-bc18-06c9a59876a0","Type":"ContainerStarted","Data":"7b370c260d2739f0c9621aa0db3682277c3ff99dbeda390ea7e99f2ef57a8b6e"}
Dec 09 12:17:37 crc kubenswrapper[4703]: I1209 12:17:37.171467 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bnzh5" event={"ID":"ca3614a5-2aca-4c15-b0c1-418925c20ce9","Type":"ContainerStarted","Data":"a0aad8f5159616f3969667cdf1d3811a36b5015b7b71e9a90270717577c0174f"}
Dec 09 12:17:37 crc kubenswrapper[4703]: I1209 12:17:37.171794 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl" event={"ID":"73ce4d20-c9fc-4eda-b404-056f4dc06c03","Type":"ContainerStarted","Data":"5a3d4ce7b63eb9ba971b89790ca8afe5e61d03fa8e0961a0a162f02b85a9fe9a"}
Dec 09 12:17:40 crc kubenswrapper[4703]: I1209 12:17:40.743132 4703 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 12:17:41 crc kubenswrapper[4703]: I1209 12:17:41.199967 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl" event={"ID":"73ce4d20-c9fc-4eda-b404-056f4dc06c03","Type":"ContainerStarted","Data":"cac05cb3b095f834b2952bb31d4f195c06bfdc25450246cc7bb55b0b3e3246e0"}
Dec 09 12:17:41 crc kubenswrapper[4703]: I1209 12:17:41.204996 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8" event={"ID":"0dec1409-3c5a-4672-bc18-06c9a59876a0","Type":"ContainerStarted","Data":"a263f55663ea9196a08297c9ada05fc1832b6273b5e1a81315ba0536ddaa8f3b"}
Dec 09 12:17:41 crc kubenswrapper[4703]: I1209 12:17:41.205156 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:17:41 crc kubenswrapper[4703]: I1209 12:17:41.224121 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-sjnhl" podStartSLOduration=2.120321902 podStartE2EDuration="6.2241022s" podCreationTimestamp="2025-12-09 12:17:35 +0000 UTC" firstStartedPulling="2025-12-09 12:17:36.575453875 +0000 UTC m=+755.824217394" lastFinishedPulling="2025-12-09 12:17:40.679234183 +0000 UTC m=+759.927997692" observedRunningTime="2025-12-09 12:17:41.217939817 +0000 UTC m=+760.466703336" watchObservedRunningTime="2025-12-09 12:17:41.2241022 +0000 UTC m=+760.472865719"
Dec 09 12:17:41 crc kubenswrapper[4703]: I1209 12:17:41.245684 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8" podStartSLOduration=2.2919937 podStartE2EDuration="6.24555089s" podCreationTimestamp="2025-12-09 12:17:35 +0000 UTC" firstStartedPulling="2025-12-09 12:17:36.726664823 +0000 UTC m=+755.975428332" lastFinishedPulling="2025-12-09 12:17:40.680221853 +0000 UTC m=+759.928985522" observedRunningTime="2025-12-09 12:17:41.238926503 +0000 UTC m=+760.487690022" watchObservedRunningTime="2025-12-09 12:17:41.24555089 +0000 UTC m=+760.494314409"
Dec 09 12:17:43 crc kubenswrapper[4703]: I1209 12:17:43.220545 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-bnzh5" event={"ID":"ca3614a5-2aca-4c15-b0c1-418925c20ce9","Type":"ContainerStarted","Data":"583ebb2601e96628a03eec58596bd48f4b3b018162741db9682897ffb68a8f1b"}
Dec 09 12:17:43 crc kubenswrapper[4703]: I1209 12:17:43.239633 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-bnzh5" podStartSLOduration=2.758459034 podStartE2EDuration="8.239613411s" podCreationTimestamp="2025-12-09 12:17:35 +0000 UTC" firstStartedPulling="2025-12-09 12:17:36.630511386 +0000 UTC m=+755.879274905" lastFinishedPulling="2025-12-09 12:17:42.111665763 +0000 UTC m=+761.360429282" observedRunningTime="2025-12-09 12:17:43.236312518 +0000 UTC m=+762.485076037" watchObservedRunningTime="2025-12-09 12:17:43.239613411 +0000 UTC m=+762.488376930"
Dec 09 12:17:46 crc kubenswrapper[4703]: I1209 12:17:46.104419 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2mlm8"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.271867 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"]
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.276100 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.278859 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.282991 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"]
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.459396 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.459960 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2hp\" (UniqueName: \"kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.460028 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.561110 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2hp\" (UniqueName: \"kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.561499 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.561693 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
\"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-util\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.562339 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.597340 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2hp\" (UniqueName: \"kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp\") pod \"7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") " pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" Dec 09 12:18:11 crc kubenswrapper[4703]: I1209 12:18:11.895114 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.135930 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"] Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.379597 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" event={"ID":"61119125-f807-4f7b-b62e-e76f6cbfe8d2","Type":"ContainerStarted","Data":"6484e9650f37a54cd42b172bb0c9826ccb063ef44a3f1eed4260381571b09f79"} Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.876119 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.876885 4703 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.876885 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.883283 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.884477 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.889077 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.980683 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:12 crc kubenswrapper[4703]: I1209 12:18:12.980950 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxfb\" (UniqueName: \"kubernetes.io/projected/d07d8d87-4b9f-4573-b886-596650efa33e-kube-api-access-vwxfb\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.081977 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.082037 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxfb\" (UniqueName: \"kubernetes.io/projected/d07d8d87-4b9f-4573-b886-596650efa33e-kube-api-access-vwxfb\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.085404 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.085611 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dba05030a2531486286433062f5e7c53c4eedd11c7311f00058d0d628c96401a/globalmount\"" pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.104900 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxfb\" (UniqueName: \"kubernetes.io/projected/d07d8d87-4b9f-4573-b886-596650efa33e-kube-api-access-vwxfb\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.113335 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-307007b5-51fb-40b3-ada3-2288f826cf8c\") pod \"minio\" (UID: \"d07d8d87-4b9f-4573-b886-596650efa33e\") " pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.190498 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.395007 4703 generic.go:334] "Generic (PLEG): container finished" podID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerID="fbb0a8ce61f9b92607b0c5f033f0bf551ee992944af7fd2925365a8219b5373c" exitCode=0
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.395239 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" event={"ID":"61119125-f807-4f7b-b62e-e76f6cbfe8d2","Type":"ContainerDied","Data":"fbb0a8ce61f9b92607b0c5f033f0bf551ee992944af7fd2925365a8219b5373c"}
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.449310 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Dec 09 12:18:13 crc kubenswrapper[4703]: W1209 12:18:13.462546 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd07d8d87_4b9f_4573_b886_596650efa33e.slice/crio-1b44af0752c29b3b9cbb9e0ac531ad422315e8c1ef5547c223665deb91e6ec3e WatchSource:0}: Error finding container 1b44af0752c29b3b9cbb9e0ac531ad422315e8c1ef5547c223665deb91e6ec3e: Status 404 returned error can't find the container with id 1b44af0752c29b3b9cbb9e0ac531ad422315e8c1ef5547c223665deb91e6ec3e
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.612963 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"]
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.618066 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.622962 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"]
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.791230 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.791311 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.791362 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krdq\" (UniqueName: \"kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.892210 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.892423 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.892583 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krdq\" (UniqueName: \"kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.892774 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.892821 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c"
\"kube-api-access-9krdq\" (UniqueName: \"kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq\") pod \"redhat-operators-n5v8c\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:18:13 crc kubenswrapper[4703]: I1209 12:18:13.936396 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:18:14 crc kubenswrapper[4703]: I1209 12:18:14.232222 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"] Dec 09 12:18:14 crc kubenswrapper[4703]: I1209 12:18:14.405067 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerStarted","Data":"b9011a2e89c244b8a57ce8e6f56e00a407e4602aecc7369ca3ba964366bce277"} Dec 09 12:18:14 crc kubenswrapper[4703]: I1209 12:18:14.408953 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d07d8d87-4b9f-4573-b886-596650efa33e","Type":"ContainerStarted","Data":"1b44af0752c29b3b9cbb9e0ac531ad422315e8c1ef5547c223665deb91e6ec3e"} Dec 09 12:18:15 crc kubenswrapper[4703]: I1209 12:18:15.418660 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerID="e6698a957d327ba7faf45e79d69bebbb91f5144d5cf943ff3b8a2ae9c72c5fa1" exitCode=0 Dec 09 12:18:15 crc kubenswrapper[4703]: I1209 12:18:15.418762 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerDied","Data":"e6698a957d327ba7faf45e79d69bebbb91f5144d5cf943ff3b8a2ae9c72c5fa1"} Dec 09 12:18:21 crc kubenswrapper[4703]: I1209 12:18:21.507843 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"d07d8d87-4b9f-4573-b886-596650efa33e","Type":"ContainerStarted","Data":"a218d52f9fa5bf3a456d36aed28d8c296c6357f5330f7cb476f877d489e5060e"} Dec 09 12:18:21 crc kubenswrapper[4703]: I1209 12:18:21.512769 4703 generic.go:334] "Generic (PLEG): container finished" podID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerID="75725e854c54c6a094659e98a25ab441a45c44512a0a1c6053b49a57962f86fe" exitCode=0 Dec 09 12:18:21 crc kubenswrapper[4703]: I1209 12:18:21.512865 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" event={"ID":"61119125-f807-4f7b-b62e-e76f6cbfe8d2","Type":"ContainerDied","Data":"75725e854c54c6a094659e98a25ab441a45c44512a0a1c6053b49a57962f86fe"} Dec 09 12:18:21 crc kubenswrapper[4703]: I1209 12:18:21.527149 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.085752299 podStartE2EDuration="11.527115287s" podCreationTimestamp="2025-12-09 12:18:10 +0000 UTC" firstStartedPulling="2025-12-09 12:18:13.46378005 +0000 UTC m=+792.712543569" lastFinishedPulling="2025-12-09 12:18:20.905143038 +0000 UTC m=+800.153906557" observedRunningTime="2025-12-09 12:18:21.524873066 +0000 UTC m=+800.773636595" watchObservedRunningTime="2025-12-09 12:18:21.527115287 +0000 UTC m=+800.775878806" Dec 09 12:18:22 crc kubenswrapper[4703]: I1209 12:18:22.522784 4703 generic.go:334] "Generic (PLEG): container finished" podID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" 
containerID="ea7f17778c698a8b0f0d0bb2ffe37235b238f85464b8896a4442f87046b25360" exitCode=0 Dec 09 12:18:22 crc kubenswrapper[4703]: I1209 12:18:22.523880 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" event={"ID":"61119125-f807-4f7b-b62e-e76f6cbfe8d2","Type":"ContainerDied","Data":"ea7f17778c698a8b0f0d0bb2ffe37235b238f85464b8896a4442f87046b25360"} Dec 09 12:18:33 crc kubenswrapper[4703]: E1209 12:18:33.319388 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 12:18:33 crc kubenswrapper[4703]: E1209 12:18:33.320217 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9krdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n5v8c_openshift-marketplace(e5a41fd7-a4e8-406a-9a10-2171a304ec62): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:18:33 crc kubenswrapper[4703]: E1209 12:18:33.321450 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n5v8c" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.336151 4703 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.336151 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5"
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.422365 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle\") pod \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") "
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.422501 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-util\") pod \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") "
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.422545 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c2hp\" (UniqueName: \"kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp\") pod \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\" (UID: \"61119125-f807-4f7b-b62e-e76f6cbfe8d2\") "
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.423645 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle" (OuterVolumeSpecName: "bundle") pod "61119125-f807-4f7b-b62e-e76f6cbfe8d2" (UID: "61119125-f807-4f7b-b62e-e76f6cbfe8d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.428867 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp" (OuterVolumeSpecName: "kube-api-access-7c2hp") pod "61119125-f807-4f7b-b62e-e76f6cbfe8d2" (UID: "61119125-f807-4f7b-b62e-e76f6cbfe8d2"). InnerVolumeSpecName "kube-api-access-7c2hp". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.524015 4703 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.524061 4703 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61119125-f807-4f7b-b62e-e76f6cbfe8d2-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.524080 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c2hp\" (UniqueName: \"kubernetes.io/projected/61119125-f807-4f7b-b62e-e76f6cbfe8d2-kube-api-access-7c2hp\") on node \"crc\" DevicePath \"\"" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.585632 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" event={"ID":"61119125-f807-4f7b-b62e-e76f6cbfe8d2","Type":"ContainerDied","Data":"6484e9650f37a54cd42b172bb0c9826ccb063ef44a3f1eed4260381571b09f79"} Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.585729 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6484e9650f37a54cd42b172bb0c9826ccb063ef44a3f1eed4260381571b09f79" Dec 09 12:18:33 crc kubenswrapper[4703]: I1209 12:18:33.585659 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5" Dec 09 12:18:33 crc kubenswrapper[4703]: E1209 12:18:33.587561 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n5v8c" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.572441 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"] Dec 09 12:18:40 crc kubenswrapper[4703]: E1209 12:18:40.572956 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="util" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.572968 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="util" Dec 09 12:18:40 crc kubenswrapper[4703]: E1209 12:18:40.572978 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="pull" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.572984 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="pull" Dec 09 12:18:40 crc kubenswrapper[4703]: E1209 12:18:40.572995 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="extract" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.573001 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" containerName="extract" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.573099 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="61119125-f807-4f7b-b62e-e76f6cbfe8d2" 
containerName="extract" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.573722 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.576027 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.576245 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.576499 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.576703 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.576949 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.586820 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-w7gzg" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.594909 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"] Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716338 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-apiservice-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716423 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-webhook-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716472 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716519 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/253ad9ab-9f72-4252-91c2-8a79577155a2-manager-config\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716545 
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.716545 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlphm\" (UniqueName: \"kubernetes.io/projected/253ad9ab-9f72-4252-91c2-8a79577155a2-kube-api-access-zlphm\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.817685 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-apiservice-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.817762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-webhook-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.817808 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.817855 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/253ad9ab-9f72-4252-91c2-8a79577155a2-manager-config\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.817883 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlphm\" (UniqueName: \"kubernetes.io/projected/253ad9ab-9f72-4252-91c2-8a79577155a2-kube-api-access-zlphm\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.820490 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/253ad9ab-9f72-4252-91c2-8a79577155a2-manager-config\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"
pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.827841 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.827846 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/253ad9ab-9f72-4252-91c2-8a79577155a2-apiservice-cert\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.842124 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlphm\" (UniqueName: \"kubernetes.io/projected/253ad9ab-9f72-4252-91c2-8a79577155a2-kube-api-access-zlphm\") pod \"loki-operator-controller-manager-78784949d8-qbts8\" (UID: \"253ad9ab-9f72-4252-91c2-8a79577155a2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:40 crc kubenswrapper[4703]: I1209 12:18:40.890789 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:18:41 crc kubenswrapper[4703]: I1209 12:18:41.215850 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8"] Dec 09 12:18:41 crc kubenswrapper[4703]: W1209 12:18:41.222986 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod253ad9ab_9f72_4252_91c2_8a79577155a2.slice/crio-1c35c34a9ffe5ab9f21e448153ada404561330627db93ad57eb29a58c05c0879 WatchSource:0}: Error finding container 1c35c34a9ffe5ab9f21e448153ada404561330627db93ad57eb29a58c05c0879: Status 404 returned error can't find the container with id 1c35c34a9ffe5ab9f21e448153ada404561330627db93ad57eb29a58c05c0879 Dec 09 12:18:41 crc kubenswrapper[4703]: I1209 12:18:41.773445 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" event={"ID":"253ad9ab-9f72-4252-91c2-8a79577155a2","Type":"ContainerStarted","Data":"1c35c34a9ffe5ab9f21e448153ada404561330627db93ad57eb29a58c05c0879"} Dec 09 12:18:52 crc kubenswrapper[4703]: I1209 12:18:52.954145 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" event={"ID":"253ad9ab-9f72-4252-91c2-8a79577155a2","Type":"ContainerStarted","Data":"44062d273349f0b59ce31ae190e338c1d01831ce61e9843dfee02efa249c056b"} Dec 09 12:18:52 crc kubenswrapper[4703]: I1209 12:18:52.959211 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerStarted","Data":"93cee9e216d7cedc8637af40de84195564c3664b5a55770c37b7ba34bce9279b"} Dec 09 12:18:59 crc kubenswrapper[4703]: I1209 12:18:59.219361 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerID="93cee9e216d7cedc8637af40de84195564c3664b5a55770c37b7ba34bce9279b" exitCode=0 Dec 09 12:18:59 crc kubenswrapper[4703]: I1209 12:18:59.219495 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerDied","Data":"93cee9e216d7cedc8637af40de84195564c3664b5a55770c37b7ba34bce9279b"} Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.296240 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" event={"ID":"253ad9ab-9f72-4252-91c2-8a79577155a2","Type":"ContainerStarted","Data":"3c780c13f7dc4da3705c1a55977c50463ac0338203c123765d0d63e61e95f5af"} Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.296926 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.299477 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerStarted","Data":"1c3570f4e3735076fcd4950fb348bd7c8d42feb423e60392b2c0baa6a3bb5b7b"} Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.299914 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.322613 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-78784949d8-qbts8" podStartSLOduration=2.165518567 podStartE2EDuration="27.322593431s" podCreationTimestamp="2025-12-09 12:18:40 +0000 UTC" firstStartedPulling="2025-12-09 12:18:41.227649211 +0000 UTC m=+820.476412730" lastFinishedPulling="2025-12-09 12:19:06.384724075 +0000 UTC m=+845.633487594" observedRunningTime="2025-12-09 12:19:07.31774044 +0000 UTC m=+846.566504019" watchObservedRunningTime="2025-12-09 12:19:07.322593431 +0000 UTC m=+846.571356970" Dec 09 12:19:07 crc kubenswrapper[4703]: I1209 12:19:07.401091 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5v8c" podStartSLOduration=3.449448433 podStartE2EDuration="54.401068561s" podCreationTimestamp="2025-12-09 12:18:13 +0000 UTC" firstStartedPulling="2025-12-09 12:18:15.420420101 +0000 UTC m=+794.669183620" lastFinishedPulling="2025-12-09 12:19:06.372040229 +0000 UTC m=+845.620803748" observedRunningTime="2025-12-09 12:19:07.399758295 +0000 UTC m=+846.648521804" watchObservedRunningTime="2025-12-09 12:19:07.401068561 +0000 UTC m=+846.649832080" Dec 09 12:19:13 crc kubenswrapper[4703]: I1209 12:19:13.937271 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:19:13 crc kubenswrapper[4703]: I1209 12:19:13.938142 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:19:13 crc kubenswrapper[4703]: I1209 12:19:13.983016 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:19:14 crc kubenswrapper[4703]: I1209 12:19:14.382154 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 12:19:14 crc kubenswrapper[4703]: I1209 12:19:14.695694 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"] Dec 09 12:19:14 crc kubenswrapper[4703]: I1209 12:19:14.900978 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:19:14 crc kubenswrapper[4703]: I1209 12:19:14.901532 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6tnb" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="registry-server" containerID="cri-o://129c0307304ea4e9ff3df1f1fb84e811d8f812c8d2fd1333c5a99d50e4a53ba6" gracePeriod=2 Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.357785 4703 generic.go:334] "Generic (PLEG): container finished" podID="fd23e539-b882-4063-88f9-2927e5439ade" containerID="129c0307304ea4e9ff3df1f1fb84e811d8f812c8d2fd1333c5a99d50e4a53ba6" exitCode=0 Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.357957 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerDied","Data":"129c0307304ea4e9ff3df1f1fb84e811d8f812c8d2fd1333c5a99d50e4a53ba6"} Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.435534 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.568961 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpcrp\" (UniqueName: \"kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp\") pod \"fd23e539-b882-4063-88f9-2927e5439ade\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.569115 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities\") pod \"fd23e539-b882-4063-88f9-2927e5439ade\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.569162 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content\") pod \"fd23e539-b882-4063-88f9-2927e5439ade\" (UID: \"fd23e539-b882-4063-88f9-2927e5439ade\") " Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.571247 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities" (OuterVolumeSpecName: "utilities") pod "fd23e539-b882-4063-88f9-2927e5439ade" (UID: "fd23e539-b882-4063-88f9-2927e5439ade"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.577163 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp" (OuterVolumeSpecName: "kube-api-access-rpcrp") pod "fd23e539-b882-4063-88f9-2927e5439ade" (UID: "fd23e539-b882-4063-88f9-2927e5439ade"). InnerVolumeSpecName "kube-api-access-rpcrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.670808 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpcrp\" (UniqueName: \"kubernetes.io/projected/fd23e539-b882-4063-88f9-2927e5439ade-kube-api-access-rpcrp\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.670850 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.696450 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd23e539-b882-4063-88f9-2927e5439ade" (UID: "fd23e539-b882-4063-88f9-2927e5439ade"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:19:16 crc kubenswrapper[4703]: I1209 12:19:16.772403 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd23e539-b882-4063-88f9-2927e5439ade-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.366208 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6tnb" event={"ID":"fd23e539-b882-4063-88f9-2927e5439ade","Type":"ContainerDied","Data":"8339bcba1c924eb678578fab225bf6ccbcb5dfe8f518fbce8d505644dd392b52"} Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.366294 4703 scope.go:117] "RemoveContainer" containerID="129c0307304ea4e9ff3df1f1fb84e811d8f812c8d2fd1333c5a99d50e4a53ba6" Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.366333 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6tnb" Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.387525 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.390151 4703 scope.go:117] "RemoveContainer" containerID="5a49c8641dc1c8474a99d6e93adf0d3ec6ffdd944f93dccc3fea5fbfc8d974d7" Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.392299 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6tnb"] Dec 09 12:19:17 crc kubenswrapper[4703]: I1209 12:19:17.408721 4703 scope.go:117] "RemoveContainer" containerID="4d2019961aa8594eea876849e5ef4e1f5d704cadad53a8e463ecefba8c2d4805" Dec 09 12:19:19 crc kubenswrapper[4703]: I1209 12:19:19.077484 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd23e539-b882-4063-88f9-2927e5439ade" path="/var/lib/kubelet/pods/fd23e539-b882-4063-88f9-2927e5439ade/volumes" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.181042 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t"] Dec 09 12:19:41 crc kubenswrapper[4703]: E1209 12:19:41.191601 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="extract-content" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.191630 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="extract-content" Dec 09 12:19:41 crc kubenswrapper[4703]: E1209 12:19:41.191664 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="extract-utilities" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.191673 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="extract-utilities" Dec 09 12:19:41 crc kubenswrapper[4703]: E1209 12:19:41.191683 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="registry-server" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.191691 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="registry-server" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.191830 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd23e539-b882-4063-88f9-2927e5439ade" containerName="registry-server" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.192928 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t"] Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.193021 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.199375 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.309518 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.309929 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crbx4\" (UniqueName: \"kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.310077 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.411701 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crbx4\" (UniqueName: \"kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.411748 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.411795 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.412422 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.412454 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.435122 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crbx4\" (UniqueName: \"kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.517002 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:41 crc kubenswrapper[4703]: I1209 12:19:41.974034 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t"] Dec 09 12:19:42 crc kubenswrapper[4703]: I1209 12:19:42.513576 4703 generic.go:334] "Generic (PLEG): container finished" podID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerID="4872141fbf0c0347fa630c8d8d408e174b2650845c758fd36e2d119a6559c866" exitCode=0 Dec 09 12:19:42 crc kubenswrapper[4703]: I1209 12:19:42.513879 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" event={"ID":"4fdac27b-2cf2-4a2d-851e-75dcb928860f","Type":"ContainerDied","Data":"4872141fbf0c0347fa630c8d8d408e174b2650845c758fd36e2d119a6559c866"} Dec 09 12:19:42 crc kubenswrapper[4703]: I1209 12:19:42.514002 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" event={"ID":"4fdac27b-2cf2-4a2d-851e-75dcb928860f","Type":"ContainerStarted","Data":"31ed7b6c52bdb2f3128bbccd266f77ea139ea75c34e3a54b757a06c317f5559f"} Dec 09 12:19:44 crc kubenswrapper[4703]: I1209 12:19:44.528092 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" event={"ID":"4fdac27b-2cf2-4a2d-851e-75dcb928860f","Type":"ContainerDied","Data":"8a6b643e3ae62a06f237c3543d05902109b2ae57a115c9d96694218f2d084ed0"} Dec 09 12:19:44 crc kubenswrapper[4703]: I1209 12:19:44.528017 4703 generic.go:334] "Generic (PLEG): container finished" podID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerID="8a6b643e3ae62a06f237c3543d05902109b2ae57a115c9d96694218f2d084ed0" exitCode=0 Dec 09 12:19:45 crc kubenswrapper[4703]: I1209 12:19:45.539033 4703 generic.go:334] "Generic (PLEG): container finished" podID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerID="78bc437030d6c1674a2930920455f6644a193cac5080affc256f46148a8cfecd" exitCode=0 Dec 09 12:19:45 crc kubenswrapper[4703]: I1209 12:19:45.539110 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" 
event={"ID":"4fdac27b-2cf2-4a2d-851e-75dcb928860f","Type":"ContainerDied","Data":"78bc437030d6c1674a2930920455f6644a193cac5080affc256f46148a8cfecd"} Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.790380 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.892774 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle\") pod \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.892826 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util\") pod \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.892893 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crbx4\" (UniqueName: \"kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4\") pod \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\" (UID: \"4fdac27b-2cf2-4a2d-851e-75dcb928860f\") " Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.894826 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle" (OuterVolumeSpecName: "bundle") pod "4fdac27b-2cf2-4a2d-851e-75dcb928860f" (UID: "4fdac27b-2cf2-4a2d-851e-75dcb928860f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.901004 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4" (OuterVolumeSpecName: "kube-api-access-crbx4") pod "4fdac27b-2cf2-4a2d-851e-75dcb928860f" (UID: "4fdac27b-2cf2-4a2d-851e-75dcb928860f"). InnerVolumeSpecName "kube-api-access-crbx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.911598 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util" (OuterVolumeSpecName: "util") pod "4fdac27b-2cf2-4a2d-851e-75dcb928860f" (UID: "4fdac27b-2cf2-4a2d-851e-75dcb928860f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.995784 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crbx4\" (UniqueName: \"kubernetes.io/projected/4fdac27b-2cf2-4a2d-851e-75dcb928860f-kube-api-access-crbx4\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.995828 4703 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:46 crc kubenswrapper[4703]: I1209 12:19:46.995837 4703 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fdac27b-2cf2-4a2d-851e-75dcb928860f-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:19:47 crc kubenswrapper[4703]: I1209 12:19:47.553886 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" event={"ID":"4fdac27b-2cf2-4a2d-851e-75dcb928860f","Type":"ContainerDied","Data":"31ed7b6c52bdb2f3128bbccd266f77ea139ea75c34e3a54b757a06c317f5559f"} Dec 09 12:19:47 crc kubenswrapper[4703]: I1209 12:19:47.553926 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ed7b6c52bdb2f3128bbccd266f77ea139ea75c34e3a54b757a06c317f5559f" Dec 09 12:19:47 crc kubenswrapper[4703]: I1209 12:19:47.553981 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.888938 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d"] Dec 09 12:19:52 crc kubenswrapper[4703]: E1209 12:19:52.889509 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="extract" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.889523 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="extract" Dec 09 12:19:52 crc kubenswrapper[4703]: E1209 12:19:52.889541 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="util" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.889549 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="util" Dec 09 12:19:52 crc kubenswrapper[4703]: E1209 12:19:52.889561 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="pull" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.889569 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="pull" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.889679 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdac27b-2cf2-4a2d-851e-75dcb928860f" containerName="extract" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.890255 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.892714 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.892989 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.893010 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-t7nvz" Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.905502 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d"] Dec 09 12:19:52 crc kubenswrapper[4703]: I1209 12:19:52.973230 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrh8g\" (UniqueName: \"kubernetes.io/projected/6d0f6831-61e4-49f6-996d-2c1dcc081e40-kube-api-access-hrh8g\") pod \"nmstate-operator-5b5b58f5c8-6r94d\" (UID: \"6d0f6831-61e4-49f6-996d-2c1dcc081e40\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" Dec 09 12:19:53 crc kubenswrapper[4703]: I1209 12:19:53.074304 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrh8g\" (UniqueName: \"kubernetes.io/projected/6d0f6831-61e4-49f6-996d-2c1dcc081e40-kube-api-access-hrh8g\") pod \"nmstate-operator-5b5b58f5c8-6r94d\" (UID: \"6d0f6831-61e4-49f6-996d-2c1dcc081e40\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" Dec 09 12:19:53 crc kubenswrapper[4703]: I1209 12:19:53.100205 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrh8g\" (UniqueName: \"kubernetes.io/projected/6d0f6831-61e4-49f6-996d-2c1dcc081e40-kube-api-access-hrh8g\") pod \"nmstate-operator-5b5b58f5c8-6r94d\" (UID: \"6d0f6831-61e4-49f6-996d-2c1dcc081e40\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" Dec 09 12:19:53 crc kubenswrapper[4703]: I1209 12:19:53.210567 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" Dec 09 12:19:53 crc kubenswrapper[4703]: I1209 12:19:53.509290 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d"] Dec 09 12:19:53 crc kubenswrapper[4703]: I1209 12:19:53.610554 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" event={"ID":"6d0f6831-61e4-49f6-996d-2c1dcc081e40","Type":"ContainerStarted","Data":"db8f98c3ca914e453ae898254a5e1312484750767efc2bc86f2e6af630aa8805"} Dec 09 12:19:56 crc kubenswrapper[4703]: I1209 12:19:56.632310 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" event={"ID":"6d0f6831-61e4-49f6-996d-2c1dcc081e40","Type":"ContainerStarted","Data":"2936a3a03a0156a08c32a5083133c6de8a4ab0608c061cde514d59f18a8ed4e9"} Dec 09 12:19:56 crc kubenswrapper[4703]: I1209 12:19:56.653545 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-6r94d" podStartSLOduration=2.238601382 podStartE2EDuration="4.65351437s" podCreationTimestamp="2025-12-09 12:19:52 +0000 UTC" firstStartedPulling="2025-12-09 12:19:53.518534414 +0000 UTC m=+892.767297933" lastFinishedPulling="2025-12-09 12:19:55.933447412 +0000 UTC m=+895.182210921" observedRunningTime="2025-12-09 12:19:56.651969828 +0000 UTC m=+895.900733347" watchObservedRunningTime="2025-12-09 12:19:56.65351437 +0000 UTC m=+895.902277889" Dec 09 12:20:00 crc kubenswrapper[4703]: I1209 12:20:00.083398 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:20:00 crc kubenswrapper[4703]: I1209 12:20:00.083951 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.455934 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.457831 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.471841 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.483260 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.484231 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.488148 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.488803 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5zvkt" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.515693 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8tjxd"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.517006 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.518692 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdn5r\" (UniqueName: \"kubernetes.io/projected/3cd7328d-79a9-479a-b16d-e0b8562cb246-kube-api-access-gdn5r\") pod \"nmstate-metrics-7f946cbc9-x4ffh\" (UID: \"3cd7328d-79a9-479a-b16d-e0b8562cb246\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.522259 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620268 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-nmstate-lock\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620343 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzmkw\" (UniqueName: \"kubernetes.io/projected/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-kube-api-access-jzmkw\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620384 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-ovs-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620412 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzff\" (UniqueName: \"kubernetes.io/projected/f7efe488-c603-4417-bc88-d790f961bd6b-kube-api-access-cbzff\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620462 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-dbus-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620633 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.620821 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdn5r\" (UniqueName: \"kubernetes.io/projected/3cd7328d-79a9-479a-b16d-e0b8562cb246-kube-api-access-gdn5r\") pod \"nmstate-metrics-7f946cbc9-x4ffh\" (UID: \"3cd7328d-79a9-479a-b16d-e0b8562cb246\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.658850 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.662233 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.665801 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.666526 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdn5r\" (UniqueName: \"kubernetes.io/projected/3cd7328d-79a9-479a-b16d-e0b8562cb246-kube-api-access-gdn5r\") pod \"nmstate-metrics-7f946cbc9-x4ffh\" (UID: \"3cd7328d-79a9-479a-b16d-e0b8562cb246\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.666706 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dbtkl" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.675458 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.677855 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd"] Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.722867 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-nmstate-lock\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.722943 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvtpm\" (UniqueName: \"kubernetes.io/projected/70fee6a1-39ea-49c7-9af1-a1479caed970-kube-api-access-kvtpm\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.722983 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723041 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70fee6a1-39ea-49c7-9af1-a1479caed970-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723055 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-nmstate-lock\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723073 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-ovs-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723207 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzff\" (UniqueName: \"kubernetes.io/projected/f7efe488-c603-4417-bc88-d790f961bd6b-kube-api-access-cbzff\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723292 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-dbus-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723369 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723530 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-ovs-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.723863 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/f7efe488-c603-4417-bc88-d790f961bd6b-dbus-socket\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.741257 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzmkw\" (UniqueName: \"kubernetes.io/projected/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-kube-api-access-jzmkw\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88"
"MountVolume.SetUp succeeded for volume \"kube-api-access-jzmkw\" (UniqueName: \"kubernetes.io/projected/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-kube-api-access-jzmkw\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.742964 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ad79ffaf-8e65-45be-bd0a-4abcd3bafb06-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-49s88\" (UID: \"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.745469 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzff\" (UniqueName: \"kubernetes.io/projected/f7efe488-c603-4417-bc88-d790f961bd6b-kube-api-access-cbzff\") pod \"nmstate-handler-8tjxd\" (UID: \"f7efe488-c603-4417-bc88-d790f961bd6b\") " pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.786829 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.806241 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.825014 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvtpm\" (UniqueName: \"kubernetes.io/projected/70fee6a1-39ea-49c7-9af1-a1479caed970-kube-api-access-kvtpm\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.825237 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.825438 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70fee6a1-39ea-49c7-9af1-a1479caed970-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.826570 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/70fee6a1-39ea-49c7-9af1-a1479caed970-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:02 crc kubenswrapper[4703]: E1209 12:20:02.827256 4703 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 12:20:02 crc kubenswrapper[4703]: E1209 12:20:02.827398 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert 
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.835573 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8tjxd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.868230 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvtpm\" (UniqueName: \"kubernetes.io/projected/70fee6a1-39ea-49c7-9af1-a1479caed970-kube-api-access-kvtpm\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd"
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.901522 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57f4674b96-v2dcb"]
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.902530 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57f4674b96-v2dcb"
Dec 09 12:20:02 crc kubenswrapper[4703]: W1209 12:20:02.911669 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7efe488_c603_4417_bc88_d790f961bd6b.slice/crio-45a0541ca550282ae70d21d59c2c716c6b87d7b88c328cbfea2bae60d138745c WatchSource:0}: Error finding container 45a0541ca550282ae70d21d59c2c716c6b87d7b88c328cbfea2bae60d138745c: Status 404 returned error can't find the container with id 45a0541ca550282ae70d21d59c2c716c6b87d7b88c328cbfea2bae60d138745c
Dec 09 12:20:02 crc kubenswrapper[4703]: I1209 12:20:02.925940 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57f4674b96-v2dcb"]
Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.036516 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-oauth-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb"
Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037117 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g986g\" (UniqueName: \"kubernetes.io/projected/a9763b25-ada6-4830-bd89-755e7df653f0-kube-api-access-g986g\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb"
Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037166 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb"
Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037218 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-console-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb"
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-console-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037252 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-service-ca\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037304 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-oauth-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.037433 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-trusted-ca-bundle\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.106890 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88"] Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.138712 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g986g\" (UniqueName: \"kubernetes.io/projected/a9763b25-ada6-4830-bd89-755e7df653f0-kube-api-access-g986g\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.138778 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.138832 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-console-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.138855 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-service-ca\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.138914 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-oauth-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: 
\"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.139004 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-trusted-ca-bundle\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.139052 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-oauth-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.141303 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-console-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.141819 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-trusted-ca-bundle\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.141913 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-service-ca\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.142426 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a9763b25-ada6-4830-bd89-755e7df653f0-oauth-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.143444 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-serving-cert\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.144583 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a9763b25-ada6-4830-bd89-755e7df653f0-console-oauth-config\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.162120 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g986g\" (UniqueName: \"kubernetes.io/projected/a9763b25-ada6-4830-bd89-755e7df653f0-kube-api-access-g986g\") pod \"console-57f4674b96-v2dcb\" (UID: \"a9763b25-ada6-4830-bd89-755e7df653f0\") " 
pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.222881 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.347960 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:03 crc kubenswrapper[4703]: E1209 12:20:03.348141 4703 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 12:20:03 crc kubenswrapper[4703]: E1209 12:20:03.348212 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert podName:70fee6a1-39ea-49c7-9af1-a1479caed970 nodeName:}" failed. No retries permitted until 2025-12-09 12:20:04.348180531 +0000 UTC m=+903.596944040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-psqsd" (UID: "70fee6a1-39ea-49c7-9af1-a1479caed970") : secret "plugin-serving-cert" not found Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.384621 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh"] Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.469992 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57f4674b96-v2dcb"] Dec 09 12:20:03 crc kubenswrapper[4703]: W1209 12:20:03.475809 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9763b25_ada6_4830_bd89_755e7df653f0.slice/crio-5457c884ee11fa8d3878f0cdcf04f92aea1d285ffdc4e1e37f839f035b2a3a17 WatchSource:0}: Error finding container 5457c884ee11fa8d3878f0cdcf04f92aea1d285ffdc4e1e37f839f035b2a3a17: Status 404 returned error can't find the container with id 5457c884ee11fa8d3878f0cdcf04f92aea1d285ffdc4e1e37f839f035b2a3a17 Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.681455 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" event={"ID":"3cd7328d-79a9-479a-b16d-e0b8562cb246","Type":"ContainerStarted","Data":"5c1b211f7754e7f52e612bd287dc088bd4ff664e5f38bdda218426ceb49a803f"} Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.682550 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" event={"ID":"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06","Type":"ContainerStarted","Data":"5a6901b610a9a6f1370dc836c77e3866426e9f00635b0ac1273034c9a006fa90"} Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.684725 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f4674b96-v2dcb" event={"ID":"a9763b25-ada6-4830-bd89-755e7df653f0","Type":"ContainerStarted","Data":"bb8d66a634c772f790b33ce8844ec400142db6fef45f806fa528434567d361ec"} Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.684782 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57f4674b96-v2dcb" 
event={"ID":"a9763b25-ada6-4830-bd89-755e7df653f0","Type":"ContainerStarted","Data":"5457c884ee11fa8d3878f0cdcf04f92aea1d285ffdc4e1e37f839f035b2a3a17"} Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.686704 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8tjxd" event={"ID":"f7efe488-c603-4417-bc88-d790f961bd6b","Type":"ContainerStarted","Data":"45a0541ca550282ae70d21d59c2c716c6b87d7b88c328cbfea2bae60d138745c"} Dec 09 12:20:03 crc kubenswrapper[4703]: I1209 12:20:03.706704 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57f4674b96-v2dcb" podStartSLOduration=1.706688328 podStartE2EDuration="1.706688328s" podCreationTimestamp="2025-12-09 12:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:20:03.705314331 +0000 UTC m=+902.954077850" watchObservedRunningTime="2025-12-09 12:20:03.706688328 +0000 UTC m=+902.955451847" Dec 09 12:20:04 crc kubenswrapper[4703]: I1209 12:20:04.363065 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:04 crc kubenswrapper[4703]: I1209 12:20:04.371926 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/70fee6a1-39ea-49c7-9af1-a1479caed970-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-psqsd\" (UID: \"70fee6a1-39ea-49c7-9af1-a1479caed970\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:04 crc kubenswrapper[4703]: I1209 12:20:04.506426 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" Dec 09 12:20:04 crc kubenswrapper[4703]: I1209 12:20:04.785260 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd"] Dec 09 12:20:05 crc kubenswrapper[4703]: I1209 12:20:05.707394 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" event={"ID":"70fee6a1-39ea-49c7-9af1-a1479caed970","Type":"ContainerStarted","Data":"4f6727ec09c02c56f8b9d06725523f52b9335f859e1306cdffbb0f17a815695d"} Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.735658 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" event={"ID":"3cd7328d-79a9-479a-b16d-e0b8562cb246","Type":"ContainerStarted","Data":"54c5b6cef42a4fd507c3f9dcae7d26d534e8aafff2695cc9b8d75f19f3420494"} Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.749912 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" event={"ID":"ad79ffaf-8e65-45be-bd0a-4abcd3bafb06","Type":"ContainerStarted","Data":"03843c898d4d4ad35cbc9f5d4810aa0229f420e761d33e758e0e79ca96be4699"} Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.750392 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.770934 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.780363 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" podStartSLOduration=1.44662402 podStartE2EDuration="4.780330765s" podCreationTimestamp="2025-12-09 12:20:02 +0000 UTC" firstStartedPulling="2025-12-09 12:20:03.123740322 +0000 UTC m=+902.372503841" lastFinishedPulling="2025-12-09 12:20:06.457447067 +0000 UTC m=+905.706210586" observedRunningTime="2025-12-09 12:20:06.77390521 +0000 UTC m=+906.022668739" watchObservedRunningTime="2025-12-09 12:20:06.780330765 +0000 UTC m=+906.029094284" Dec 09 12:20:06 crc kubenswrapper[4703]: I1209 12:20:06.803536 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8tjxd" podStartSLOduration=1.264255807 podStartE2EDuration="4.803513006s" podCreationTimestamp="2025-12-09 12:20:02 +0000 UTC" firstStartedPulling="2025-12-09 12:20:02.917244202 +0000 UTC m=+902.166007721" lastFinishedPulling="2025-12-09 12:20:06.456501391 +0000 UTC m=+905.705264920" observedRunningTime="2025-12-09 12:20:06.800881004 +0000 UTC m=+906.049644533" watchObservedRunningTime="2025-12-09 12:20:06.803513006 +0000 UTC m=+906.052276525" Dec 09 12:20:07 crc kubenswrapper[4703]: I1209 12:20:07.780778 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8tjxd" event={"ID":"f7efe488-c603-4417-bc88-d790f961bd6b","Type":"ContainerStarted","Data":"e1304ec886ecd89748c25821ba7e89b03f8f8d2794fc2d7325df8b8e27648077"} Dec 09 12:20:09 crc kubenswrapper[4703]: I1209 12:20:09.802556 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" event={"ID":"3cd7328d-79a9-479a-b16d-e0b8562cb246","Type":"ContainerStarted","Data":"4a0849788ba51f282869d2fc16a3ae819e6c488b223f6be3f2253ad203e073bb"} Dec 09 12:20:09 crc 
kubenswrapper[4703]: I1209 12:20:09.824698 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-x4ffh" podStartSLOduration=2.273736493 podStartE2EDuration="7.824678154s" podCreationTimestamp="2025-12-09 12:20:02 +0000 UTC" firstStartedPulling="2025-12-09 12:20:03.395122049 +0000 UTC m=+902.643885568" lastFinishedPulling="2025-12-09 12:20:08.94606371 +0000 UTC m=+908.194827229" observedRunningTime="2025-12-09 12:20:09.821010044 +0000 UTC m=+909.069773563" watchObservedRunningTime="2025-12-09 12:20:09.824678154 +0000 UTC m=+909.073441673" Dec 09 12:20:12 crc kubenswrapper[4703]: I1209 12:20:12.825671 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" event={"ID":"70fee6a1-39ea-49c7-9af1-a1479caed970","Type":"ContainerStarted","Data":"e572a93a1f9157b710e8650c071d39a6387c4bccc6bb167eaf18db2bc3a116b8"} Dec 09 12:20:12 crc kubenswrapper[4703]: I1209 12:20:12.845034 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-psqsd" podStartSLOduration=3.206436818 podStartE2EDuration="10.84501065s" podCreationTimestamp="2025-12-09 12:20:02 +0000 UTC" firstStartedPulling="2025-12-09 12:20:04.801496156 +0000 UTC m=+904.050259685" lastFinishedPulling="2025-12-09 12:20:12.440069998 +0000 UTC m=+911.688833517" observedRunningTime="2025-12-09 12:20:12.843096978 +0000 UTC m=+912.091860497" watchObservedRunningTime="2025-12-09 12:20:12.84501065 +0000 UTC m=+912.093774169" Dec 09 12:20:12 crc kubenswrapper[4703]: I1209 12:20:12.862966 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8tjxd" Dec 09 12:20:13 crc kubenswrapper[4703]: I1209 12:20:13.223798 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:13 crc kubenswrapper[4703]: I1209 12:20:13.223886 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:13 crc kubenswrapper[4703]: I1209 12:20:13.232095 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:13 crc kubenswrapper[4703]: I1209 12:20:13.835921 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57f4674b96-v2dcb" Dec 09 12:20:13 crc kubenswrapper[4703]: I1209 12:20:13.896297 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w77wb"] Dec 09 12:20:22 crc kubenswrapper[4703]: I1209 12:20:22.811756 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-49s88" Dec 09 12:20:30 crc kubenswrapper[4703]: I1209 12:20:30.083932 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:20:30 crc kubenswrapper[4703]: I1209 12:20:30.084556 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.934744 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh"] Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.936492 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.939332 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.944215 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-w77wb" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" containerName="console" containerID="cri-o://7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad" gracePeriod=15 Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.952156 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh"] Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.993437 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.993483 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lhs\" (UniqueName: \"kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:38 crc kubenswrapper[4703]: I1209 12:20:38.993566 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.095705 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.095819 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.095871 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4lhs\" (UniqueName: \"kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.096503 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.096716 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.115387 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4lhs\" (UniqueName: \"kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.260513 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.389411 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w77wb_2078d397-e8a5-4dbf-8573-360a9c373084/console/0.log" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.389495 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.502796 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.502842 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.502868 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.502918 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.503002 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.503027 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snfpv\" (UniqueName: \"kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.503053 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config\") pod \"2078d397-e8a5-4dbf-8573-360a9c373084\" (UID: \"2078d397-e8a5-4dbf-8573-360a9c373084\") " Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.503667 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config" (OuterVolumeSpecName: "console-config") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.504073 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.504308 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca" (OuterVolumeSpecName: "service-ca") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.504730 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.507150 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.507669 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv" (OuterVolumeSpecName: "kube-api-access-snfpv") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "kube-api-access-snfpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.507971 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2078d397-e8a5-4dbf-8573-360a9c373084" (UID: "2078d397-e8a5-4dbf-8573-360a9c373084"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604443 4703 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604479 4703 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604488 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snfpv\" (UniqueName: \"kubernetes.io/projected/2078d397-e8a5-4dbf-8573-360a9c373084-kube-api-access-snfpv\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604498 4703 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604507 4703 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604517 4703 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2078d397-e8a5-4dbf-8573-360a9c373084-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.604525 4703 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2078d397-e8a5-4dbf-8573-360a9c373084-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.724108 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh"] Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.999385 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-w77wb_2078d397-e8a5-4dbf-8573-360a9c373084/console/0.log" Dec 09 12:20:39 crc kubenswrapper[4703]: I1209 12:20:39.999445 4703 generic.go:334] "Generic (PLEG): container finished" podID="2078d397-e8a5-4dbf-8573-360a9c373084" containerID="7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad" exitCode=2 Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:39.999517 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w77wb" event={"ID":"2078d397-e8a5-4dbf-8573-360a9c373084","Type":"ContainerDied","Data":"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad"} Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:39.999551 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-w77wb" event={"ID":"2078d397-e8a5-4dbf-8573-360a9c373084","Type":"ContainerDied","Data":"e11cbdcb9eaeb83f21e728183ff63cfb3227504496e5bb4a5a249049888f13a1"} Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:39.999550 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-w77wb" Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:39.999569 4703 scope.go:117] "RemoveContainer" containerID="7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad" Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:40.000477 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" event={"ID":"870e333f-b82c-439d-9d53-ce57aa5c83c9","Type":"ContainerStarted","Data":"3b3ad70ea540c4d2c929c83f81d8efbd61ec738b62e317d253217ef93505e478"} Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:40.041916 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-w77wb"] Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:40.047001 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-w77wb"] Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:40.216757 4703 scope.go:117] "RemoveContainer" containerID="7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad" Dec 09 12:20:40 crc kubenswrapper[4703]: E1209 12:20:40.217369 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad\": container with ID starting with 7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad not found: ID does not exist" containerID="7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad" Dec 09 12:20:40 crc kubenswrapper[4703]: I1209 12:20:40.217413 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad"} err="failed to get container status \"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad\": rpc error: code = NotFound desc = could not find container \"7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad\": container with ID starting with 7f2220a3c5eef3d8f446b06831ebef88ed7ca8de767af8eae163248b8b6221ad not found: ID does not exist" Dec 09 12:20:41 crc kubenswrapper[4703]: I1209 12:20:41.013027 4703 generic.go:334] "Generic (PLEG): container finished" podID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerID="e54e786dfecda980762ee566cbd01e8d0223ece045d3055f1c2d8ff9c033efdf" exitCode=0 Dec 09 12:20:41 crc kubenswrapper[4703]: I1209 12:20:41.013169 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" event={"ID":"870e333f-b82c-439d-9d53-ce57aa5c83c9","Type":"ContainerDied","Data":"e54e786dfecda980762ee566cbd01e8d0223ece045d3055f1c2d8ff9c033efdf"} Dec 09 12:20:41 crc kubenswrapper[4703]: I1209 12:20:41.079112 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" path="/var/lib/kubelet/pods/2078d397-e8a5-4dbf-8573-360a9c373084/volumes" Dec 09 12:20:43 crc kubenswrapper[4703]: I1209 12:20:43.032249 4703 generic.go:334] "Generic (PLEG): container finished" podID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerID="325303c9a2554b0013a604ba89f7b441e0d7b7c1a21ee15ab3a44310ac817c3a" exitCode=0 Dec 09 12:20:43 crc kubenswrapper[4703]: I1209 12:20:43.032363 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" 
event={"ID":"870e333f-b82c-439d-9d53-ce57aa5c83c9","Type":"ContainerDied","Data":"325303c9a2554b0013a604ba89f7b441e0d7b7c1a21ee15ab3a44310ac817c3a"} Dec 09 12:20:44 crc kubenswrapper[4703]: I1209 12:20:44.040447 4703 generic.go:334] "Generic (PLEG): container finished" podID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerID="f9657b8717d3e0dc2889499ea1408b94f7815ca7752a98576a3e97c6c527b31f" exitCode=0 Dec 09 12:20:44 crc kubenswrapper[4703]: I1209 12:20:44.040507 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" event={"ID":"870e333f-b82c-439d-9d53-ce57aa5c83c9","Type":"ContainerDied","Data":"f9657b8717d3e0dc2889499ea1408b94f7815ca7752a98576a3e97c6c527b31f"} Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.388383 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.486151 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util\") pod \"870e333f-b82c-439d-9d53-ce57aa5c83c9\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.486505 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle\") pod \"870e333f-b82c-439d-9d53-ce57aa5c83c9\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.486584 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4lhs\" (UniqueName: \"kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs\") pod \"870e333f-b82c-439d-9d53-ce57aa5c83c9\" (UID: \"870e333f-b82c-439d-9d53-ce57aa5c83c9\") " Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.487580 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle" (OuterVolumeSpecName: "bundle") pod "870e333f-b82c-439d-9d53-ce57aa5c83c9" (UID: "870e333f-b82c-439d-9d53-ce57aa5c83c9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.492377 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs" (OuterVolumeSpecName: "kube-api-access-j4lhs") pod "870e333f-b82c-439d-9d53-ce57aa5c83c9" (UID: "870e333f-b82c-439d-9d53-ce57aa5c83c9"). InnerVolumeSpecName "kube-api-access-j4lhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.501583 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util" (OuterVolumeSpecName: "util") pod "870e333f-b82c-439d-9d53-ce57aa5c83c9" (UID: "870e333f-b82c-439d-9d53-ce57aa5c83c9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.588165 4703 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.588548 4703 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/870e333f-b82c-439d-9d53-ce57aa5c83c9-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:45 crc kubenswrapper[4703]: I1209 12:20:45.588623 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4lhs\" (UniqueName: \"kubernetes.io/projected/870e333f-b82c-439d-9d53-ce57aa5c83c9-kube-api-access-j4lhs\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:46 crc kubenswrapper[4703]: I1209 12:20:46.064783 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" event={"ID":"870e333f-b82c-439d-9d53-ce57aa5c83c9","Type":"ContainerDied","Data":"3b3ad70ea540c4d2c929c83f81d8efbd61ec738b62e317d253217ef93505e478"} Dec 09 12:20:46 crc kubenswrapper[4703]: I1209 12:20:46.065229 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b3ad70ea540c4d2c929c83f81d8efbd61ec738b62e317d253217ef93505e478" Dec 09 12:20:46 crc kubenswrapper[4703]: I1209 12:20:46.064889 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.076956 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:20:53 crc kubenswrapper[4703]: E1209 12:20:53.077655 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="util" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.077673 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="util" Dec 09 12:20:53 crc kubenswrapper[4703]: E1209 12:20:53.077696 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="pull" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.077705 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="pull" Dec 09 12:20:53 crc kubenswrapper[4703]: E1209 12:20:53.077720 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" containerName="console" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.077727 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" containerName="console" Dec 09 12:20:53 crc kubenswrapper[4703]: E1209 12:20:53.077745 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="extract" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.077753 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="extract" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.077951 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2078d397-e8a5-4dbf-8573-360a9c373084" containerName="console" Dec 09 12:20:53 crc 
kubenswrapper[4703]: I1209 12:20:53.077967 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="870e333f-b82c-439d-9d53-ce57aa5c83c9" containerName="extract" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.078777 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.081276 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.104896 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmqkf\" (UniqueName: \"kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.104986 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.105183 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.206568 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmqkf\" (UniqueName: \"kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.206687 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.206740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.208117 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.208226 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.238053 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmqkf\" (UniqueName: \"kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf\") pod \"certified-operators-drrdm\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.404207 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:20:53 crc kubenswrapper[4703]: I1209 12:20:53.775074 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:20:54 crc kubenswrapper[4703]: I1209 12:20:54.122141 4703 generic.go:334] "Generic (PLEG): container finished" podID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerID="a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade" exitCode=0 Dec 09 12:20:54 crc kubenswrapper[4703]: I1209 12:20:54.122219 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerDied","Data":"a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade"} Dec 09 12:20:54 crc kubenswrapper[4703]: I1209 12:20:54.122252 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerStarted","Data":"4bb151290b24d04d4bea50aea25296f3c87ac70ac793b65fb2fcbc8105456370"} Dec 09 12:20:55 crc kubenswrapper[4703]: I1209 12:20:55.137845 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerStarted","Data":"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b"} Dec 09 12:20:56 crc kubenswrapper[4703]: I1209 12:20:56.145441 4703 generic.go:334] "Generic (PLEG): container finished" podID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerID="41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b" exitCode=0 Dec 09 12:20:56 crc kubenswrapper[4703]: I1209 12:20:56.145537 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerDied","Data":"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b"} Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.160413 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerStarted","Data":"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0"} Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.182981 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drrdm" podStartSLOduration=1.597005561 podStartE2EDuration="4.182934044s" podCreationTimestamp="2025-12-09 12:20:53 +0000 UTC" firstStartedPulling="2025-12-09 12:20:54.123904744 +0000 UTC m=+953.372668263" lastFinishedPulling="2025-12-09 12:20:56.709833227 
+0000 UTC m=+955.958596746" observedRunningTime="2025-12-09 12:20:57.176593641 +0000 UTC m=+956.425357160" watchObservedRunningTime="2025-12-09 12:20:57.182934044 +0000 UTC m=+956.431697553" Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.986842 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t"] Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.988014 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:57 crc kubenswrapper[4703]: W1209 12:20:57.991550 4703 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Dec 09 12:20:57 crc kubenswrapper[4703]: E1209 12:20:57.991595 4703 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.991616 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.991854 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 12:20:57 crc kubenswrapper[4703]: I1209 12:20:57.994438 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.009574 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dscvh" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.022074 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t"] Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.131348 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-apiservice-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.131431 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcpw\" (UniqueName: \"kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.131465 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-webhook-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.233407 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-webhook-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.233544 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-apiservice-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.233584 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcpw\" (UniqueName: \"kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.242230 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-apiservice-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.242308 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/099a4868-6d13-416d-bace-7c2a09de41a2-webhook-cert\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.283380 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5"] Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.284393 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.289739 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.296353 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.296398 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-fjj8c" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.317019 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5"] Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.437417 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-webhook-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.437567 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-apiservice-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.437672 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsz8\" (UniqueName: \"kubernetes.io/projected/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-kube-api-access-kvsz8\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.538656 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsz8\" (UniqueName: \"kubernetes.io/projected/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-kube-api-access-kvsz8\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.538736 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-webhook-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.538765 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-apiservice-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 
12:20:58.548099 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-apiservice-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:58 crc kubenswrapper[4703]: I1209 12:20:58.562713 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-webhook-cert\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:59 crc kubenswrapper[4703]: E1209 12:20:59.252090 4703 projected.go:288] Couldn't get configMap metallb-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:20:59 crc kubenswrapper[4703]: E1209 12:20:59.252150 4703 projected.go:194] Error preparing data for projected volume kube-api-access-drcpw for pod metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:20:59 crc kubenswrapper[4703]: E1209 12:20:59.252245 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw podName:099a4868-6d13-416d-bace-7c2a09de41a2 nodeName:}" failed. No retries permitted until 2025-12-09 12:20:59.752217715 +0000 UTC m=+959.000981234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-drcpw" (UniqueName: "kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw") pod "metallb-operator-controller-manager-6cdb947f87-7qh5t" (UID: "099a4868-6d13-416d-bace-7c2a09de41a2") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.371209 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.383038 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsz8\" (UniqueName: \"kubernetes.io/projected/f8b98ceb-52b8-4666-8f76-06b1b3e6c01a-kube-api-access-kvsz8\") pod \"metallb-operator-webhook-server-867f57ddcd-4kfk5\" (UID: \"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a\") " pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.514966 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.760966 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcpw\" (UniqueName: \"kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.766660 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcpw\" (UniqueName: \"kubernetes.io/projected/099a4868-6d13-416d-bace-7c2a09de41a2-kube-api-access-drcpw\") pod \"metallb-operator-controller-manager-6cdb947f87-7qh5t\" (UID: \"099a4868-6d13-416d-bace-7c2a09de41a2\") " pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:20:59 crc kubenswrapper[4703]: I1209 12:20:59.809986 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.007232 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5"] Dec 09 12:21:00 crc kubenswrapper[4703]: W1209 12:21:00.011870 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8b98ceb_52b8_4666_8f76_06b1b3e6c01a.slice/crio-841d25da3ff49f261c6e07bd92fc687ba8a476c4f866d9f1375b61564711dc31 WatchSource:0}: Error finding container 841d25da3ff49f261c6e07bd92fc687ba8a476c4f866d9f1375b61564711dc31: Status 404 returned error can't find the container with id 841d25da3ff49f261c6e07bd92fc687ba8a476c4f866d9f1375b61564711dc31 Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.083453 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.083533 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.083594 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.084293 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.084402 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" 
podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af" gracePeriod=600 Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.184909 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" event={"ID":"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a","Type":"ContainerStarted","Data":"841d25da3ff49f261c6e07bd92fc687ba8a476c4f866d9f1375b61564711dc31"} Dec 09 12:21:00 crc kubenswrapper[4703]: I1209 12:21:00.252335 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t"] Dec 09 12:21:00 crc kubenswrapper[4703]: W1209 12:21:00.256380 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod099a4868_6d13_416d_bace_7c2a09de41a2.slice/crio-dabc2aa4e06e38aaa5f9b1ca4bc70057a3d39efc8d58fc6349e2a2daa234589a WatchSource:0}: Error finding container dabc2aa4e06e38aaa5f9b1ca4bc70057a3d39efc8d58fc6349e2a2daa234589a: Status 404 returned error can't find the container with id dabc2aa4e06e38aaa5f9b1ca4bc70057a3d39efc8d58fc6349e2a2daa234589a Dec 09 12:21:01 crc kubenswrapper[4703]: I1209 12:21:01.193677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" event={"ID":"099a4868-6d13-416d-bace-7c2a09de41a2","Type":"ContainerStarted","Data":"dabc2aa4e06e38aaa5f9b1ca4bc70057a3d39efc8d58fc6349e2a2daa234589a"} Dec 09 12:21:01 crc kubenswrapper[4703]: I1209 12:21:01.196305 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af" exitCode=0 Dec 09 12:21:01 crc kubenswrapper[4703]: I1209 12:21:01.196365 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af"} Dec 09 12:21:01 crc kubenswrapper[4703]: I1209 12:21:01.196422 4703 scope.go:117] "RemoveContainer" containerID="070529224aea51e7b9ab2ac8deaa225f76e2ab9d46c38789bc31e027e9fd43af" Dec 09 12:21:02 crc kubenswrapper[4703]: I1209 12:21:02.270514 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f"} Dec 09 12:21:03 crc kubenswrapper[4703]: I1209 12:21:03.404411 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:03 crc kubenswrapper[4703]: I1209 12:21:03.404717 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:03 crc kubenswrapper[4703]: I1209 12:21:03.509526 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:04 crc kubenswrapper[4703]: I1209 12:21:04.398802 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:05 crc kubenswrapper[4703]: I1209 12:21:05.859061 4703 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.317346 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drrdm" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="registry-server" containerID="cri-o://6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0" gracePeriod=2 Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.884554 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.977604 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities\") pod \"7ec4b8a0-7198-479a-9853-6ba5b9606365\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.977758 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmqkf\" (UniqueName: \"kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf\") pod \"7ec4b8a0-7198-479a-9853-6ba5b9606365\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.977819 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content\") pod \"7ec4b8a0-7198-479a-9853-6ba5b9606365\" (UID: \"7ec4b8a0-7198-479a-9853-6ba5b9606365\") " Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.978648 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities" (OuterVolumeSpecName: "utilities") pod "7ec4b8a0-7198-479a-9853-6ba5b9606365" (UID: "7ec4b8a0-7198-479a-9853-6ba5b9606365"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:06 crc kubenswrapper[4703]: I1209 12:21:06.988048 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf" (OuterVolumeSpecName: "kube-api-access-qmqkf") pod "7ec4b8a0-7198-479a-9853-6ba5b9606365" (UID: "7ec4b8a0-7198-479a-9853-6ba5b9606365"). InnerVolumeSpecName "kube-api-access-qmqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.036424 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ec4b8a0-7198-479a-9853-6ba5b9606365" (UID: "7ec4b8a0-7198-479a-9853-6ba5b9606365"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.080365 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.080447 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmqkf\" (UniqueName: \"kubernetes.io/projected/7ec4b8a0-7198-479a-9853-6ba5b9606365-kube-api-access-qmqkf\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.080511 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ec4b8a0-7198-479a-9853-6ba5b9606365-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.324764 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" event={"ID":"099a4868-6d13-416d-bace-7c2a09de41a2","Type":"ContainerStarted","Data":"4c386f6d80f61f3647c70022e9ba084bd2c549e0f558df74f2079257c2417d9e"} Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.325615 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.327645 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" event={"ID":"f8b98ceb-52b8-4666-8f76-06b1b3e6c01a","Type":"ContainerStarted","Data":"1cd5b207c59a8d3b964a4e95669fded3e435b7d9ff6e128c5078e7a2ea679206"} Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.328126 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.332754 4703 generic.go:334] "Generic (PLEG): container finished" podID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerID="6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0" exitCode=0 Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.332802 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerDied","Data":"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0"} Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.332866 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drrdm" event={"ID":"7ec4b8a0-7198-479a-9853-6ba5b9606365","Type":"ContainerDied","Data":"4bb151290b24d04d4bea50aea25296f3c87ac70ac793b65fb2fcbc8105456370"} Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.332890 4703 scope.go:117] "RemoveContainer" containerID="6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.332936 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drrdm" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.352841 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" podStartSLOduration=6.831369996 podStartE2EDuration="10.352819041s" podCreationTimestamp="2025-12-09 12:20:57 +0000 UTC" firstStartedPulling="2025-12-09 12:21:00.259981052 +0000 UTC m=+959.508744561" lastFinishedPulling="2025-12-09 12:21:03.781430097 +0000 UTC m=+963.030193606" observedRunningTime="2025-12-09 12:21:07.351444524 +0000 UTC m=+966.600208043" watchObservedRunningTime="2025-12-09 12:21:07.352819041 +0000 UTC m=+966.601582560" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.357152 4703 scope.go:117] "RemoveContainer" containerID="41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.377153 4703 scope.go:117] "RemoveContainer" containerID="a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.378688 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" podStartSLOduration=2.744882911 podStartE2EDuration="9.378669985s" podCreationTimestamp="2025-12-09 12:20:58 +0000 UTC" firstStartedPulling="2025-12-09 12:21:00.01407004 +0000 UTC m=+959.262833559" lastFinishedPulling="2025-12-09 12:21:06.647857114 +0000 UTC m=+965.896620633" observedRunningTime="2025-12-09 12:21:07.377583386 +0000 UTC m=+966.626346915" watchObservedRunningTime="2025-12-09 12:21:07.378669985 +0000 UTC m=+966.627433504" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.395330 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.399152 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drrdm"] Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.401431 4703 scope.go:117] "RemoveContainer" containerID="6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0" Dec 09 12:21:07 crc kubenswrapper[4703]: E1209 12:21:07.402128 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0\": container with ID starting with 6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0 not found: ID does not exist" containerID="6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.402178 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0"} err="failed to get container status \"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0\": rpc error: code = NotFound desc = could not find container \"6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0\": container with ID starting with 6067f9558d2910f5bf407908ac076d2efec569aa29363e662cfd62a059d026c0 not found: ID does not exist" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.402224 4703 scope.go:117] "RemoveContainer" containerID="41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b" Dec 09 12:21:07 crc kubenswrapper[4703]: E1209 
12:21:07.402711 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b\": container with ID starting with 41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b not found: ID does not exist" containerID="41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.402760 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b"} err="failed to get container status \"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b\": rpc error: code = NotFound desc = could not find container \"41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b\": container with ID starting with 41dbda2a7cd59c09a7eb608848a34efa5b9603c03952f9e3a2c6c94a54e5076b not found: ID does not exist" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.402794 4703 scope.go:117] "RemoveContainer" containerID="a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade" Dec 09 12:21:07 crc kubenswrapper[4703]: E1209 12:21:07.403330 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade\": container with ID starting with a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade not found: ID does not exist" containerID="a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade" Dec 09 12:21:07 crc kubenswrapper[4703]: I1209 12:21:07.403421 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade"} err="failed to get container status \"a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade\": rpc error: code = NotFound desc = could not find container \"a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade\": container with ID starting with a510ac3bf84fd3dc976a75f103d3ed9c6ee56b933b9b37f659bfa402552efade not found: ID does not exist" Dec 09 12:21:09 crc kubenswrapper[4703]: I1209 12:21:09.078404 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" path="/var/lib/kubelet/pods/7ec4b8a0-7198-479a-9853-6ba5b9606365/volumes" Dec 09 12:21:19 crc kubenswrapper[4703]: I1209 12:21:19.520292 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-867f57ddcd-4kfk5" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.802050 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:20 crc kubenswrapper[4703]: E1209 12:21:20.802365 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="registry-server" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.802380 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="registry-server" Dec 09 12:21:20 crc kubenswrapper[4703]: E1209 12:21:20.802398 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="extract-utilities" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.802406 4703 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="extract-utilities" Dec 09 12:21:20 crc kubenswrapper[4703]: E1209 12:21:20.802417 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="extract-content" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.802425 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="extract-content" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.802555 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec4b8a0-7198-479a-9853-6ba5b9606365" containerName="registry-server" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.803612 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.825449 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.886666 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.886771 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkfg\" (UniqueName: \"kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.886815 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.987558 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.987882 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.987952 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwkfg\" (UniqueName: \"kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc 
kubenswrapper[4703]: I1209 12:21:20.988434 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:20 crc kubenswrapper[4703]: I1209 12:21:20.988911 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:21 crc kubenswrapper[4703]: I1209 12:21:21.011838 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkfg\" (UniqueName: \"kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg\") pod \"redhat-marketplace-49xwk\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:21 crc kubenswrapper[4703]: I1209 12:21:21.121274 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:21 crc kubenswrapper[4703]: I1209 12:21:21.436779 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:22 crc kubenswrapper[4703]: I1209 12:21:22.435160 4703 generic.go:334] "Generic (PLEG): container finished" podID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerID="6ee3018f7337a6698718016839cb6449a0cb50dc161341ac2bb237a6d04249ad" exitCode=0 Dec 09 12:21:22 crc kubenswrapper[4703]: I1209 12:21:22.435329 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerDied","Data":"6ee3018f7337a6698718016839cb6449a0cb50dc161341ac2bb237a6d04249ad"} Dec 09 12:21:22 crc kubenswrapper[4703]: I1209 12:21:22.435516 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerStarted","Data":"5fefa4b00c8a0324a2b21299acd6ec5830b7cab58d14f7c3b486c2972d2f7ee8"} Dec 09 12:21:24 crc kubenswrapper[4703]: I1209 12:21:24.451772 4703 generic.go:334] "Generic (PLEG): container finished" podID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerID="3fb28ad061b17237da1813c76f0adb08935a6ffc72d745e4ba1a3dc4296f0c05" exitCode=0 Dec 09 12:21:24 crc kubenswrapper[4703]: I1209 12:21:24.451873 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerDied","Data":"3fb28ad061b17237da1813c76f0adb08935a6ffc72d745e4ba1a3dc4296f0c05"} Dec 09 12:21:26 crc kubenswrapper[4703]: I1209 12:21:26.469874 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerStarted","Data":"cf3af3a3840bb182eee872b00d7ab05704a421374ab78fb9e666e3adb6d9faeb"} Dec 09 12:21:26 crc kubenswrapper[4703]: I1209 12:21:26.501696 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-49xwk" podStartSLOduration=3.816813066 
podStartE2EDuration="6.501662611s" podCreationTimestamp="2025-12-09 12:21:20 +0000 UTC" firstStartedPulling="2025-12-09 12:21:22.436745946 +0000 UTC m=+981.685509455" lastFinishedPulling="2025-12-09 12:21:25.121595491 +0000 UTC m=+984.370359000" observedRunningTime="2025-12-09 12:21:26.49427526 +0000 UTC m=+985.743038809" watchObservedRunningTime="2025-12-09 12:21:26.501662611 +0000 UTC m=+985.750426130" Dec 09 12:21:31 crc kubenswrapper[4703]: I1209 12:21:31.122728 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:31 crc kubenswrapper[4703]: I1209 12:21:31.123396 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:31 crc kubenswrapper[4703]: I1209 12:21:31.171766 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:31 crc kubenswrapper[4703]: I1209 12:21:31.576143 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:31 crc kubenswrapper[4703]: I1209 12:21:31.620496 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:33 crc kubenswrapper[4703]: I1209 12:21:33.514560 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-49xwk" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="registry-server" containerID="cri-o://cf3af3a3840bb182eee872b00d7ab05704a421374ab78fb9e666e3adb6d9faeb" gracePeriod=2 Dec 09 12:21:34 crc kubenswrapper[4703]: I1209 12:21:34.521511 4703 generic.go:334] "Generic (PLEG): container finished" podID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerID="cf3af3a3840bb182eee872b00d7ab05704a421374ab78fb9e666e3adb6d9faeb" exitCode=0 Dec 09 12:21:34 crc kubenswrapper[4703]: I1209 12:21:34.521586 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerDied","Data":"cf3af3a3840bb182eee872b00d7ab05704a421374ab78fb9e666e3adb6d9faeb"} Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.119269 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.223703 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwkfg\" (UniqueName: \"kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg\") pod \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.223784 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities\") pod \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.223827 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content\") pod \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\" (UID: \"b9c056a7-1fa7-4b8e-8ea8-5686a8708501\") " Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.226495 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities" (OuterVolumeSpecName: "utilities") pod "b9c056a7-1fa7-4b8e-8ea8-5686a8708501" (UID: "b9c056a7-1fa7-4b8e-8ea8-5686a8708501"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.231033 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg" (OuterVolumeSpecName: "kube-api-access-jwkfg") pod "b9c056a7-1fa7-4b8e-8ea8-5686a8708501" (UID: "b9c056a7-1fa7-4b8e-8ea8-5686a8708501"). InnerVolumeSpecName "kube-api-access-jwkfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.252907 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9c056a7-1fa7-4b8e-8ea8-5686a8708501" (UID: "b9c056a7-1fa7-4b8e-8ea8-5686a8708501"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.325382 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwkfg\" (UniqueName: \"kubernetes.io/projected/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-kube-api-access-jwkfg\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.325424 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.325433 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c056a7-1fa7-4b8e-8ea8-5686a8708501-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.530880 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49xwk" event={"ID":"b9c056a7-1fa7-4b8e-8ea8-5686a8708501","Type":"ContainerDied","Data":"5fefa4b00c8a0324a2b21299acd6ec5830b7cab58d14f7c3b486c2972d2f7ee8"} Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.530945 4703 scope.go:117] "RemoveContainer" containerID="cf3af3a3840bb182eee872b00d7ab05704a421374ab78fb9e666e3adb6d9faeb" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.531082 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49xwk" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.557451 4703 scope.go:117] "RemoveContainer" containerID="3fb28ad061b17237da1813c76f0adb08935a6ffc72d745e4ba1a3dc4296f0c05" Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.568652 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.579367 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-49xwk"] Dec 09 12:21:35 crc kubenswrapper[4703]: I1209 12:21:35.583478 4703 scope.go:117] "RemoveContainer" containerID="6ee3018f7337a6698718016839cb6449a0cb50dc161341ac2bb237a6d04249ad" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.079041 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" path="/var/lib/kubelet/pods/b9c056a7-1fa7-4b8e-8ea8-5686a8708501/volumes" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.364790 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:37 crc kubenswrapper[4703]: E1209 12:21:37.365219 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="extract-content" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.365243 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="extract-content" Dec 09 12:21:37 crc kubenswrapper[4703]: E1209 12:21:37.365269 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="registry-server" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.365279 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="registry-server" Dec 09 12:21:37 crc kubenswrapper[4703]: E1209 12:21:37.365295 4703 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="extract-utilities" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.365303 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="extract-utilities" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.365497 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c056a7-1fa7-4b8e-8ea8-5686a8708501" containerName="registry-server" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.366735 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.380479 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.455312 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86d65\" (UniqueName: \"kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.455481 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.455845 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.557357 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.557426 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86d65\" (UniqueName: \"kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.557493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.558285 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.558321 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.580024 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86d65\" (UniqueName: \"kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65\") pod \"community-operators-4hnb7\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:37 crc kubenswrapper[4703]: I1209 12:21:37.681137 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:38 crc kubenswrapper[4703]: I1209 12:21:38.048427 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:38 crc kubenswrapper[4703]: I1209 12:21:38.556323 4703 generic.go:334] "Generic (PLEG): container finished" podID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerID="6fe7ec559aa4aa53eb246428fa2e1720d50dbd27d5256600b10afaaf29074a6c" exitCode=0 Dec 09 12:21:38 crc kubenswrapper[4703]: I1209 12:21:38.556407 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerDied","Data":"6fe7ec559aa4aa53eb246428fa2e1720d50dbd27d5256600b10afaaf29074a6c"} Dec 09 12:21:38 crc kubenswrapper[4703]: I1209 12:21:38.556455 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerStarted","Data":"f186099c31b8835ac18ffe8f5312a22f9877c5b614456d2c4f4684ab9dd55942"} Dec 09 12:21:39 crc kubenswrapper[4703]: I1209 12:21:39.816500 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6cdb947f87-7qh5t" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.572233 4703 generic.go:334] "Generic (PLEG): container finished" podID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerID="9b0cdd6b728b459f93146b6ecf22a7ca80d9dd2d0747c32683f2104937eaf147" exitCode=0 Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.572278 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerDied","Data":"9b0cdd6b728b459f93146b6ecf22a7ca80d9dd2d0747c32683f2104937eaf147"} Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.573845 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.651488 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j5bbv"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.654304 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.658481 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.658481 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.659471 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nszx7" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.673560 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.676451 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.693520 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.747820 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.789237 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-r685h"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.790522 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-r685h" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.794613 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.794778 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.794987 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-b259r" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.795138 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.814738 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-76gdd"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.815979 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.827841 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.828127 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-76gdd"] Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835551 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-startup\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835600 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-conf\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835634 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835655 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dd2a2e2-1c03-4529-946e-abe76835d2f4-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835679 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgwx\" (UniqueName: \"kubernetes.io/projected/8e7bbe07-74ed-43ff-9034-1e93caf42289-kube-api-access-9kgwx\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835700 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics-certs\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835726 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-sockets\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835762 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-reloader\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.835793 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjz6\" (UniqueName: \"kubernetes.io/projected/9dd2a2e2-1c03-4529-946e-abe76835d2f4-kube-api-access-2fjz6\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.936911 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-reloader\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.936991 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwxc\" (UniqueName: \"kubernetes.io/projected/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-kube-api-access-mpwxc\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937052 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fjz6\" (UniqueName: \"kubernetes.io/projected/9dd2a2e2-1c03-4529-946e-abe76835d2f4-kube-api-access-2fjz6\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937105 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metallb-excludel2\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937131 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-metrics-certs\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937199 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937343 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-startup\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937389 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-conf\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937411 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-cert\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937439 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937434 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-reloader\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937466 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dd2a2e2-1c03-4529-946e-abe76835d2f4-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937540 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgwx\" (UniqueName: \"kubernetes.io/projected/8e7bbe07-74ed-43ff-9034-1e93caf42289-kube-api-access-9kgwx\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937568 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics-certs\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937599 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937633 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-sockets\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937666 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6cp\" (UniqueName: \"kubernetes.io/projected/e44337b7-8419-419c-8281-64da2bc8d0aa-kube-api-access-8c6cp\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937712 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics\") pod \"frr-k8s-j5bbv\" (UID: 
\"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937895 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-conf\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.937999 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-sockets\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.938692 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8e7bbe07-74ed-43ff-9034-1e93caf42289-frr-startup\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.947830 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7bbe07-74ed-43ff-9034-1e93caf42289-metrics-certs\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.955147 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9dd2a2e2-1c03-4529-946e-abe76835d2f4-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.960991 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fjz6\" (UniqueName: \"kubernetes.io/projected/9dd2a2e2-1c03-4529-946e-abe76835d2f4-kube-api-access-2fjz6\") pod \"frr-k8s-webhook-server-7fcb986d4-bw4n2\" (UID: \"9dd2a2e2-1c03-4529-946e-abe76835d2f4\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.969425 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgwx\" (UniqueName: \"kubernetes.io/projected/8e7bbe07-74ed-43ff-9034-1e93caf42289-kube-api-access-9kgwx\") pod \"frr-k8s-j5bbv\" (UID: \"8e7bbe07-74ed-43ff-9034-1e93caf42289\") " pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:40 crc kubenswrapper[4703]: I1209 12:21:40.979257 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039077 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039128 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6cp\" (UniqueName: \"kubernetes.io/projected/e44337b7-8419-419c-8281-64da2bc8d0aa-kube-api-access-8c6cp\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039159 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpwxc\" (UniqueName: \"kubernetes.io/projected/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-kube-api-access-mpwxc\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039201 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metallb-excludel2\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039224 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-metrics-certs\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039244 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.039278 4703 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.039366 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist podName:06eb48ca-40a8-4c61-97f5-b6ed1f0691fe nodeName:}" failed. No retries permitted until 2025-12-09 12:21:41.539346722 +0000 UTC m=+1000.788110241 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist") pod "speaker-r685h" (UID: "06eb48ca-40a8-4c61-97f5-b6ed1f0691fe") : secret "metallb-memberlist" not found Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.039286 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-cert\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.039909 4703 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.039979 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs podName:06eb48ca-40a8-4c61-97f5-b6ed1f0691fe nodeName:}" failed. No retries permitted until 2025-12-09 12:21:41.539964928 +0000 UTC m=+1000.788728447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs") pod "speaker-r685h" (UID: "06eb48ca-40a8-4c61-97f5-b6ed1f0691fe") : secret "speaker-certs-secret" not found Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.040503 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metallb-excludel2\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.044111 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.044656 4703 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.045805 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-metrics-certs\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.062345 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e44337b7-8419-419c-8281-64da2bc8d0aa-cert\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.072680 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6cp\" (UniqueName: \"kubernetes.io/projected/e44337b7-8419-419c-8281-64da2bc8d0aa-kube-api-access-8c6cp\") pod \"controller-f8648f98b-76gdd\" (UID: \"e44337b7-8419-419c-8281-64da2bc8d0aa\") " pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.098318 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpwxc\" (UniqueName: \"kubernetes.io/projected/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-kube-api-access-mpwxc\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.134963 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.471001 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-76gdd"] Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.551520 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.551661 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.551737 4703 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 12:21:41 crc kubenswrapper[4703]: E1209 12:21:41.551817 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist podName:06eb48ca-40a8-4c61-97f5-b6ed1f0691fe nodeName:}" failed. No retries permitted until 2025-12-09 12:21:42.551796886 +0000 UTC m=+1001.800560405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist") pod "speaker-r685h" (UID: "06eb48ca-40a8-4c61-97f5-b6ed1f0691fe") : secret "metallb-memberlist" not found Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.558941 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-metrics-certs\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.581057 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"25da3005ca501487c662c41f8ff3932734662ec53b2f17042d6e36d2816cf0b6"} Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.584909 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-76gdd" event={"ID":"e44337b7-8419-419c-8281-64da2bc8d0aa","Type":"ContainerStarted","Data":"41c287fbf20b204b0ce59271612366e7ec11a90c56af0000d6c623e0242fbdc4"} Dec 09 12:21:41 crc kubenswrapper[4703]: I1209 12:21:41.587437 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2"] Dec 09 12:21:41 crc kubenswrapper[4703]: W1209 12:21:41.607245 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dd2a2e2_1c03_4529_946e_abe76835d2f4.slice/crio-ae0474aeacb38c23b7abc681c921bb92de8f67de95c78bc36666df485feb878d WatchSource:0}: Error finding container ae0474aeacb38c23b7abc681c921bb92de8f67de95c78bc36666df485feb878d: Status 404 returned error can't find the container with id ae0474aeacb38c23b7abc681c921bb92de8f67de95c78bc36666df485feb878d Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.566377 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.576984 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/06eb48ca-40a8-4c61-97f5-b6ed1f0691fe-memberlist\") pod \"speaker-r685h\" (UID: \"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe\") " pod="metallb-system/speaker-r685h" Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.605612 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-r685h" Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.608859 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-76gdd" event={"ID":"e44337b7-8419-419c-8281-64da2bc8d0aa","Type":"ContainerStarted","Data":"edcf4b2f5ed370c97eec9f73ed97a57fdbbb6474e0aa1a439d7f7a4a9233dfd9"} Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.608904 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-76gdd" event={"ID":"e44337b7-8419-419c-8281-64da2bc8d0aa","Type":"ContainerStarted","Data":"5aa26a97b8d02270b7251f71e37fd8b93c44eef3a83e1c4f71a0c74bd94a1dfd"} Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.608989 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.610218 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" event={"ID":"9dd2a2e2-1c03-4529-946e-abe76835d2f4","Type":"ContainerStarted","Data":"ae0474aeacb38c23b7abc681c921bb92de8f67de95c78bc36666df485feb878d"} Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.619700 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerStarted","Data":"4fe58f807e6c7c00b14fd3b25fb426df723b5723ea947449979777ae7ed11704"} Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.626576 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-76gdd" podStartSLOduration=2.626549131 podStartE2EDuration="2.626549131s" podCreationTimestamp="2025-12-09 12:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:21:42.622695056 +0000 UTC m=+1001.871458575" watchObservedRunningTime="2025-12-09 12:21:42.626549131 +0000 UTC m=+1001.875312670" Dec 09 12:21:42 crc kubenswrapper[4703]: I1209 12:21:42.646575 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hnb7" podStartSLOduration=2.697772511 podStartE2EDuration="5.646557641s" podCreationTimestamp="2025-12-09 12:21:37 +0000 UTC" firstStartedPulling="2025-12-09 12:21:38.559317614 +0000 UTC m=+997.808081133" lastFinishedPulling="2025-12-09 12:21:41.508102744 +0000 UTC m=+1000.756866263" observedRunningTime="2025-12-09 12:21:42.646471719 +0000 UTC m=+1001.895235238" watchObservedRunningTime="2025-12-09 12:21:42.646557641 +0000 UTC m=+1001.895321170" Dec 09 12:21:42 crc kubenswrapper[4703]: W1209 12:21:42.668686 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06eb48ca_40a8_4c61_97f5_b6ed1f0691fe.slice/crio-0eacc36972890cf26ac7eb95b6823eb1d1b3f361b17b85aaa36663a94e420e48 WatchSource:0}: Error finding container 0eacc36972890cf26ac7eb95b6823eb1d1b3f361b17b85aaa36663a94e420e48: Status 404 returned error can't find the container with id 0eacc36972890cf26ac7eb95b6823eb1d1b3f361b17b85aaa36663a94e420e48 Dec 09 12:21:43 crc kubenswrapper[4703]: I1209 12:21:43.657311 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r685h" 
event={"ID":"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe","Type":"ContainerStarted","Data":"812af42f9becf3de7c62cb7c2bc50e95e64dd37d5af1256d43901da75ae74d0f"} Dec 09 12:21:43 crc kubenswrapper[4703]: I1209 12:21:43.657635 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r685h" event={"ID":"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe","Type":"ContainerStarted","Data":"99401d7b56212e6c2de7b2be6b25a25a9eb8d48f387d9673c4a4d3ec9df4259c"} Dec 09 12:21:43 crc kubenswrapper[4703]: I1209 12:21:43.657649 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-r685h" event={"ID":"06eb48ca-40a8-4c61-97f5-b6ed1f0691fe","Type":"ContainerStarted","Data":"0eacc36972890cf26ac7eb95b6823eb1d1b3f361b17b85aaa36663a94e420e48"} Dec 09 12:21:43 crc kubenswrapper[4703]: I1209 12:21:43.658746 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-r685h" Dec 09 12:21:43 crc kubenswrapper[4703]: I1209 12:21:43.713315 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-r685h" podStartSLOduration=3.713296231 podStartE2EDuration="3.713296231s" podCreationTimestamp="2025-12-09 12:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:21:43.700172336 +0000 UTC m=+1002.948935855" watchObservedRunningTime="2025-12-09 12:21:43.713296231 +0000 UTC m=+1002.962059750" Dec 09 12:21:47 crc kubenswrapper[4703]: I1209 12:21:47.681283 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:47 crc kubenswrapper[4703]: I1209 12:21:47.681633 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:47 crc kubenswrapper[4703]: I1209 12:21:47.744673 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:47 crc kubenswrapper[4703]: I1209 12:21:47.794249 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:47 crc kubenswrapper[4703]: I1209 12:21:47.980009 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:49 crc kubenswrapper[4703]: I1209 12:21:49.716947 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hnb7" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="registry-server" containerID="cri-o://4fe58f807e6c7c00b14fd3b25fb426df723b5723ea947449979777ae7ed11704" gracePeriod=2 Dec 09 12:21:50 crc kubenswrapper[4703]: I1209 12:21:50.726160 4703 generic.go:334] "Generic (PLEG): container finished" podID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerID="4fe58f807e6c7c00b14fd3b25fb426df723b5723ea947449979777ae7ed11704" exitCode=0 Dec 09 12:21:50 crc kubenswrapper[4703]: I1209 12:21:50.726250 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerDied","Data":"4fe58f807e6c7c00b14fd3b25fb426df723b5723ea947449979777ae7ed11704"} Dec 09 12:21:51 crc kubenswrapper[4703]: I1209 12:21:51.143216 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-f8648f98b-76gdd" Dec 09 12:21:52 crc kubenswrapper[4703]: I1209 12:21:52.610529 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-r685h" Dec 09 12:21:52 crc kubenswrapper[4703]: I1209 12:21:52.947579 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.035555 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities\") pod \"80bacab6-4086-43bb-9b2a-c32839661a2f\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.035647 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86d65\" (UniqueName: \"kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65\") pod \"80bacab6-4086-43bb-9b2a-c32839661a2f\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.035750 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content\") pod \"80bacab6-4086-43bb-9b2a-c32839661a2f\" (UID: \"80bacab6-4086-43bb-9b2a-c32839661a2f\") " Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.036678 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities" (OuterVolumeSpecName: "utilities") pod "80bacab6-4086-43bb-9b2a-c32839661a2f" (UID: "80bacab6-4086-43bb-9b2a-c32839661a2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.042595 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65" (OuterVolumeSpecName: "kube-api-access-86d65") pod "80bacab6-4086-43bb-9b2a-c32839661a2f" (UID: "80bacab6-4086-43bb-9b2a-c32839661a2f"). InnerVolumeSpecName "kube-api-access-86d65". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.093014 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80bacab6-4086-43bb-9b2a-c32839661a2f" (UID: "80bacab6-4086-43bb-9b2a-c32839661a2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.137464 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86d65\" (UniqueName: \"kubernetes.io/projected/80bacab6-4086-43bb-9b2a-c32839661a2f-kube-api-access-86d65\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.137502 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.137516 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bacab6-4086-43bb-9b2a-c32839661a2f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.747318 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hnb7" event={"ID":"80bacab6-4086-43bb-9b2a-c32839661a2f","Type":"ContainerDied","Data":"f186099c31b8835ac18ffe8f5312a22f9877c5b614456d2c4f4684ab9dd55942"} Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.747909 4703 scope.go:117] "RemoveContainer" containerID="4fe58f807e6c7c00b14fd3b25fb426df723b5723ea947449979777ae7ed11704" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.747351 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hnb7" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.748935 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" event={"ID":"9dd2a2e2-1c03-4529-946e-abe76835d2f4","Type":"ContainerStarted","Data":"771a1a6d9edb549938991c9c2d5d50ce53aa007a478bfb2395b4f9fc87ce429f"} Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.749054 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.750695 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e7bbe07-74ed-43ff-9034-1e93caf42289" containerID="0b15ede9ef65efccf69e9736c0e32e528a7d2114dd4a53195c9190496cdb0fa4" exitCode=0 Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.750733 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerDied","Data":"0b15ede9ef65efccf69e9736c0e32e528a7d2114dd4a53195c9190496cdb0fa4"} Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.779890 4703 scope.go:117] "RemoveContainer" containerID="9b0cdd6b728b459f93146b6ecf22a7ca80d9dd2d0747c32683f2104937eaf147" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.781039 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" podStartSLOduration=2.002340496 podStartE2EDuration="13.781018968s" podCreationTimestamp="2025-12-09 12:21:40 +0000 UTC" firstStartedPulling="2025-12-09 12:21:41.614854221 +0000 UTC m=+1000.863617740" lastFinishedPulling="2025-12-09 12:21:53.393532693 +0000 UTC m=+1012.642296212" observedRunningTime="2025-12-09 12:21:53.780422592 +0000 UTC m=+1013.029186111" watchObservedRunningTime="2025-12-09 12:21:53.781018968 +0000 UTC m=+1013.029782507" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.861815 4703 scope.go:117] 
"RemoveContainer" containerID="6fe7ec559aa4aa53eb246428fa2e1720d50dbd27d5256600b10afaaf29074a6c" Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.868795 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:53 crc kubenswrapper[4703]: I1209 12:21:53.876096 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hnb7"] Dec 09 12:21:54 crc kubenswrapper[4703]: I1209 12:21:54.778675 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e7bbe07-74ed-43ff-9034-1e93caf42289" containerID="cba7888d83470c820be11bf2e1ad8bfe7f6e3bd3eb24c8e3ae372c8b2c5c76e5" exitCode=0 Dec 09 12:21:54 crc kubenswrapper[4703]: I1209 12:21:54.778835 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerDied","Data":"cba7888d83470c820be11bf2e1ad8bfe7f6e3bd3eb24c8e3ae372c8b2c5c76e5"} Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.082083 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" path="/var/lib/kubelet/pods/80bacab6-4086-43bb-9b2a-c32839661a2f/volumes" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.623474 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:21:55 crc kubenswrapper[4703]: E1209 12:21:55.623868 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="registry-server" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.623893 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="registry-server" Dec 09 12:21:55 crc kubenswrapper[4703]: E1209 12:21:55.623908 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="extract-utilities" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.623918 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="extract-utilities" Dec 09 12:21:55 crc kubenswrapper[4703]: E1209 12:21:55.623943 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="extract-content" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.623953 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="extract-content" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.624124 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bacab6-4086-43bb-9b2a-c32839661a2f" containerName="registry-server" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.625010 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.630286 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gqzcc" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.631695 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.631901 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.636417 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.793393 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2zq\" (UniqueName: \"kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq\") pod \"openstack-operator-index-tlzvs\" (UID: \"92768094-d1ac-4a01-a085-3d96fae07f95\") " pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.793952 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e7bbe07-74ed-43ff-9034-1e93caf42289" containerID="f7c6208b60c6fc947c83243bc958948f8c89aa31da747756fbfde90266736154" exitCode=0 Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.793980 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerDied","Data":"f7c6208b60c6fc947c83243bc958948f8c89aa31da747756fbfde90266736154"} Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.895493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2zq\" (UniqueName: \"kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq\") pod \"openstack-operator-index-tlzvs\" (UID: \"92768094-d1ac-4a01-a085-3d96fae07f95\") " pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.921877 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2zq\" (UniqueName: \"kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq\") pod \"openstack-operator-index-tlzvs\" (UID: \"92768094-d1ac-4a01-a085-3d96fae07f95\") " pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:21:55 crc kubenswrapper[4703]: I1209 12:21:55.967545 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:21:56 crc kubenswrapper[4703]: I1209 12:21:56.519839 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:21:56 crc kubenswrapper[4703]: I1209 12:21:56.807305 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"e6b647d0de9e9fe4fa32f7427c097f003617f301736ddc6298d13d8f652f2279"} Dec 09 12:21:56 crc kubenswrapper[4703]: I1209 12:21:56.807353 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"f481e2f146171165cd15e30ef95084aeb9f0fa802ee38d2c7a59bbd877e00def"} Dec 09 12:21:56 crc kubenswrapper[4703]: I1209 12:21:56.807366 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"7aefa1071d01bd988876606a9b6b712e1a99dbea4d9dc3d1f233bb7a5b1744d3"} Dec 09 12:21:56 crc kubenswrapper[4703]: I1209 12:21:56.810057 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tlzvs" event={"ID":"92768094-d1ac-4a01-a085-3d96fae07f95","Type":"ContainerStarted","Data":"4a2b0f79ce0410bd113e47720090d835b7eb7752eb64e5d4fa26d35143cef34d"} Dec 09 12:21:57 crc kubenswrapper[4703]: I1209 12:21:57.824606 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"111d7557459e4c7c09baa82e9e556750e53077d003b0d71e6a9d28766b60761c"} Dec 09 12:21:57 crc kubenswrapper[4703]: I1209 12:21:57.825186 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:21:57 crc kubenswrapper[4703]: I1209 12:21:57.825279 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"00fdc69de43051b37f61ff2211514dc24795f68edfa4887d9beba0f913155756"} Dec 09 12:21:57 crc kubenswrapper[4703]: I1209 12:21:57.825296 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j5bbv" event={"ID":"8e7bbe07-74ed-43ff-9034-1e93caf42289","Type":"ContainerStarted","Data":"2645b31261f88447ce8f16bb64fd467b514487ebbb2804241796b8ed6bc6fee0"} Dec 09 12:21:57 crc kubenswrapper[4703]: I1209 12:21:57.854359 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j5bbv" podStartSLOduration=5.7873028699999995 podStartE2EDuration="17.854340249s" podCreationTimestamp="2025-12-09 12:21:40 +0000 UTC" firstStartedPulling="2025-12-09 12:21:41.306573145 +0000 UTC m=+1000.555336664" lastFinishedPulling="2025-12-09 12:21:53.373610524 +0000 UTC m=+1012.622374043" observedRunningTime="2025-12-09 12:21:57.853367663 +0000 UTC m=+1017.102131182" watchObservedRunningTime="2025-12-09 12:21:57.854340249 +0000 UTC m=+1017.103103768" Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.159101 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.770587 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-p777b"] 
Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.771786 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.777420 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p777b"] Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.867992 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qncn\" (UniqueName: \"kubernetes.io/projected/546d3606-c78f-46dc-8869-9bac880e6757-kube-api-access-7qncn\") pod \"openstack-operator-index-p777b\" (UID: \"546d3606-c78f-46dc-8869-9bac880e6757\") " pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.969890 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qncn\" (UniqueName: \"kubernetes.io/projected/546d3606-c78f-46dc-8869-9bac880e6757-kube-api-access-7qncn\") pod \"openstack-operator-index-p777b\" (UID: \"546d3606-c78f-46dc-8869-9bac880e6757\") " pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:21:59 crc kubenswrapper[4703]: I1209 12:21:59.995034 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qncn\" (UniqueName: \"kubernetes.io/projected/546d3606-c78f-46dc-8869-9bac880e6757-kube-api-access-7qncn\") pod \"openstack-operator-index-p777b\" (UID: \"546d3606-c78f-46dc-8869-9bac880e6757\") " pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:00 crc kubenswrapper[4703]: I1209 12:22:00.096583 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:00 crc kubenswrapper[4703]: I1209 12:22:00.544817 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p777b"] Dec 09 12:22:00 crc kubenswrapper[4703]: I1209 12:22:00.845285 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p777b" event={"ID":"546d3606-c78f-46dc-8869-9bac880e6757","Type":"ContainerStarted","Data":"d081ebcaf0e68d9a5feb9c560af2e5454b63349fafb35ae56de1be7c3c2f8130"} Dec 09 12:22:00 crc kubenswrapper[4703]: I1209 12:22:00.980723 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:22:01 crc kubenswrapper[4703]: I1209 12:22:01.023638 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:22:04 crc kubenswrapper[4703]: I1209 12:22:04.877564 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p777b" event={"ID":"546d3606-c78f-46dc-8869-9bac880e6757","Type":"ContainerStarted","Data":"a9aadbed3aaed32230fb0d318ed4663df4a6161a26e151fa64fba64e6f6ec8bb"} Dec 09 12:22:04 crc kubenswrapper[4703]: I1209 12:22:04.881381 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tlzvs" event={"ID":"92768094-d1ac-4a01-a085-3d96fae07f95","Type":"ContainerStarted","Data":"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2"} Dec 09 12:22:04 crc kubenswrapper[4703]: I1209 12:22:04.881559 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-tlzvs" 
podUID="92768094-d1ac-4a01-a085-3d96fae07f95" containerName="registry-server" containerID="cri-o://2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2" gracePeriod=2 Dec 09 12:22:04 crc kubenswrapper[4703]: I1209 12:22:04.902728 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-p777b" podStartSLOduration=2.268349616 podStartE2EDuration="5.90270803s" podCreationTimestamp="2025-12-09 12:21:59 +0000 UTC" firstStartedPulling="2025-12-09 12:22:00.551822915 +0000 UTC m=+1019.800586434" lastFinishedPulling="2025-12-09 12:22:04.186181339 +0000 UTC m=+1023.434944848" observedRunningTime="2025-12-09 12:22:04.897523129 +0000 UTC m=+1024.146286648" watchObservedRunningTime="2025-12-09 12:22:04.90270803 +0000 UTC m=+1024.151471549" Dec 09 12:22:04 crc kubenswrapper[4703]: I1209 12:22:04.919001 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tlzvs" podStartSLOduration=2.265775217 podStartE2EDuration="9.91898374s" podCreationTimestamp="2025-12-09 12:21:55 +0000 UTC" firstStartedPulling="2025-12-09 12:21:56.533351607 +0000 UTC m=+1015.782115126" lastFinishedPulling="2025-12-09 12:22:04.18656013 +0000 UTC m=+1023.435323649" observedRunningTime="2025-12-09 12:22:04.916152404 +0000 UTC m=+1024.164915923" watchObservedRunningTime="2025-12-09 12:22:04.91898374 +0000 UTC m=+1024.167747259" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.223706 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.350220 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg2zq\" (UniqueName: \"kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq\") pod \"92768094-d1ac-4a01-a085-3d96fae07f95\" (UID: \"92768094-d1ac-4a01-a085-3d96fae07f95\") " Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.360415 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq" (OuterVolumeSpecName: "kube-api-access-bg2zq") pod "92768094-d1ac-4a01-a085-3d96fae07f95" (UID: "92768094-d1ac-4a01-a085-3d96fae07f95"). InnerVolumeSpecName "kube-api-access-bg2zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.452384 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg2zq\" (UniqueName: \"kubernetes.io/projected/92768094-d1ac-4a01-a085-3d96fae07f95-kube-api-access-bg2zq\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.890421 4703 generic.go:334] "Generic (PLEG): container finished" podID="92768094-d1ac-4a01-a085-3d96fae07f95" containerID="2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2" exitCode=0 Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.890507 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tlzvs" event={"ID":"92768094-d1ac-4a01-a085-3d96fae07f95","Type":"ContainerDied","Data":"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2"} Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.890552 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tlzvs" event={"ID":"92768094-d1ac-4a01-a085-3d96fae07f95","Type":"ContainerDied","Data":"4a2b0f79ce0410bd113e47720090d835b7eb7752eb64e5d4fa26d35143cef34d"} Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.890561 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tlzvs" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.890573 4703 scope.go:117] "RemoveContainer" containerID="2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.909106 4703 scope.go:117] "RemoveContainer" containerID="2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2" Dec 09 12:22:05 crc kubenswrapper[4703]: E1209 12:22:05.909484 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2\": container with ID starting with 2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2 not found: ID does not exist" containerID="2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.909514 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2"} err="failed to get container status \"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2\": rpc error: code = NotFound desc = could not find container \"2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2\": container with ID starting with 2d21b59683b167a5897ebe350d7e3eac252b19d9fd96820140d518144a3c55a2 not found: ID does not exist" Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.918854 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:22:05 crc kubenswrapper[4703]: I1209 12:22:05.922910 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tlzvs"] Dec 09 12:22:07 crc kubenswrapper[4703]: I1209 12:22:07.078172 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92768094-d1ac-4a01-a085-3d96fae07f95" path="/var/lib/kubelet/pods/92768094-d1ac-4a01-a085-3d96fae07f95/volumes" Dec 09 12:22:10 crc kubenswrapper[4703]: I1209 12:22:10.097706 4703 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:10 crc kubenswrapper[4703]: I1209 12:22:10.097790 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:10 crc kubenswrapper[4703]: I1209 12:22:10.127335 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:10 crc kubenswrapper[4703]: I1209 12:22:10.958576 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-p777b" Dec 09 12:22:10 crc kubenswrapper[4703]: I1209 12:22:10.983063 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j5bbv" Dec 09 12:22:11 crc kubenswrapper[4703]: I1209 12:22:11.066466 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-bw4n2" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.038238 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6"] Dec 09 12:22:17 crc kubenswrapper[4703]: E1209 12:22:17.038861 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768094-d1ac-4a01-a085-3d96fae07f95" containerName="registry-server" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.038877 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768094-d1ac-4a01-a085-3d96fae07f95" containerName="registry-server" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.039030 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="92768094-d1ac-4a01-a085-3d96fae07f95" containerName="registry-server" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.039950 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.043176 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-t9jbp" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.049811 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6"] Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.108991 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.109045 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h99\" (UniqueName: \"kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.109077 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.210862 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.210931 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h99\" (UniqueName: \"kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.210962 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.211601 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.211699 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.230184 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h99\" (UniqueName: \"kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.365163 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:17 crc kubenswrapper[4703]: I1209 12:22:17.981803 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6"] Dec 09 12:22:17 crc kubenswrapper[4703]: W1209 12:22:17.987279 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ad644c_f525_4885_89b3_1d61751802f3.slice/crio-d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038 WatchSource:0}: Error finding container d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038: Status 404 returned error can't find the container with id d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038 Dec 09 12:22:18 crc kubenswrapper[4703]: I1209 12:22:18.983307 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" event={"ID":"b5ad644c-f525-4885-89b3-1d61751802f3","Type":"ContainerStarted","Data":"d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038"} Dec 09 12:22:19 crc kubenswrapper[4703]: I1209 12:22:19.990303 4703 generic.go:334] "Generic (PLEG): container finished" podID="b5ad644c-f525-4885-89b3-1d61751802f3" containerID="618dfd495157b8539722a1f082941f74532bc9d21b6da92ef316a75fb30775e2" exitCode=0 Dec 09 12:22:19 crc kubenswrapper[4703]: I1209 12:22:19.990407 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" event={"ID":"b5ad644c-f525-4885-89b3-1d61751802f3","Type":"ContainerDied","Data":"618dfd495157b8539722a1f082941f74532bc9d21b6da92ef316a75fb30775e2"} Dec 09 12:22:27 crc kubenswrapper[4703]: I1209 12:22:27.035771 4703 generic.go:334] "Generic (PLEG): container finished" podID="b5ad644c-f525-4885-89b3-1d61751802f3" containerID="c7e4b55665c1671ce6f4a6f8324d947a6114ef1e65cec60115e049680eff09a0" exitCode=0 Dec 09 12:22:27 crc kubenswrapper[4703]: I1209 12:22:27.035860 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" event={"ID":"b5ad644c-f525-4885-89b3-1d61751802f3","Type":"ContainerDied","Data":"c7e4b55665c1671ce6f4a6f8324d947a6114ef1e65cec60115e049680eff09a0"} Dec 09 12:22:28 crc kubenswrapper[4703]: I1209 12:22:28.045407 4703 generic.go:334] "Generic (PLEG): container finished" podID="b5ad644c-f525-4885-89b3-1d61751802f3" containerID="eddb6b1393e2789ef02882014ed56b1ba2c51e9597c79fea589c857c37c2de85" exitCode=0 Dec 09 12:22:28 crc kubenswrapper[4703]: I1209 12:22:28.045476 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" event={"ID":"b5ad644c-f525-4885-89b3-1d61751802f3","Type":"ContainerDied","Data":"eddb6b1393e2789ef02882014ed56b1ba2c51e9597c79fea589c857c37c2de85"} Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.352351 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.489435 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util\") pod \"b5ad644c-f525-4885-89b3-1d61751802f3\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.489532 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle\") pod \"b5ad644c-f525-4885-89b3-1d61751802f3\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.489636 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8h99\" (UniqueName: \"kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99\") pod \"b5ad644c-f525-4885-89b3-1d61751802f3\" (UID: \"b5ad644c-f525-4885-89b3-1d61751802f3\") " Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.490747 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle" (OuterVolumeSpecName: "bundle") pod "b5ad644c-f525-4885-89b3-1d61751802f3" (UID: "b5ad644c-f525-4885-89b3-1d61751802f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.495949 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99" (OuterVolumeSpecName: "kube-api-access-m8h99") pod "b5ad644c-f525-4885-89b3-1d61751802f3" (UID: "b5ad644c-f525-4885-89b3-1d61751802f3"). InnerVolumeSpecName "kube-api-access-m8h99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.504611 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util" (OuterVolumeSpecName: "util") pod "b5ad644c-f525-4885-89b3-1d61751802f3" (UID: "b5ad644c-f525-4885-89b3-1d61751802f3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.591471 4703 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.591552 4703 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5ad644c-f525-4885-89b3-1d61751802f3-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:29 crc kubenswrapper[4703]: I1209 12:22:29.591566 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8h99\" (UniqueName: \"kubernetes.io/projected/b5ad644c-f525-4885-89b3-1d61751802f3-kube-api-access-m8h99\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:30 crc kubenswrapper[4703]: I1209 12:22:30.064950 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" event={"ID":"b5ad644c-f525-4885-89b3-1d61751802f3","Type":"ContainerDied","Data":"d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038"} Dec 09 12:22:30 crc kubenswrapper[4703]: I1209 12:22:30.065012 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c6ec9749d14472ed5e0f9133501fde15304c8f8622168871fca1a51ab9e038" Dec 09 12:22:30 crc kubenswrapper[4703]: I1209 12:22:30.065056 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.803664 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc"] Dec 09 12:22:34 crc kubenswrapper[4703]: E1209 12:22:34.804417 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="util" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.804491 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="util" Dec 09 12:22:34 crc kubenswrapper[4703]: E1209 12:22:34.804508 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="pull" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.804514 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="pull" Dec 09 12:22:34 crc kubenswrapper[4703]: E1209 12:22:34.804523 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="extract" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.804529 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="extract" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.804642 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ad644c-f525-4885-89b3-1d61751802f3" containerName="extract" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.805058 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.807311 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-scj47" Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.847378 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc"] Dec 09 12:22:34 crc kubenswrapper[4703]: I1209 12:22:34.970670 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6d7g\" (UniqueName: \"kubernetes.io/projected/7b10fb00-08de-4e39-ba8e-58a46ec09b19-kube-api-access-w6d7g\") pod \"openstack-operator-controller-operator-6979fbd8bc-6qblc\" (UID: \"7b10fb00-08de-4e39-ba8e-58a46ec09b19\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:35 crc kubenswrapper[4703]: I1209 12:22:35.072723 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6d7g\" (UniqueName: \"kubernetes.io/projected/7b10fb00-08de-4e39-ba8e-58a46ec09b19-kube-api-access-w6d7g\") pod \"openstack-operator-controller-operator-6979fbd8bc-6qblc\" (UID: \"7b10fb00-08de-4e39-ba8e-58a46ec09b19\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:35 crc kubenswrapper[4703]: I1209 12:22:35.100468 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6d7g\" (UniqueName: \"kubernetes.io/projected/7b10fb00-08de-4e39-ba8e-58a46ec09b19-kube-api-access-w6d7g\") pod \"openstack-operator-controller-operator-6979fbd8bc-6qblc\" (UID: \"7b10fb00-08de-4e39-ba8e-58a46ec09b19\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:35 crc kubenswrapper[4703]: I1209 12:22:35.126693 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:35 crc kubenswrapper[4703]: I1209 12:22:35.593874 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc"] Dec 09 12:22:36 crc kubenswrapper[4703]: I1209 12:22:36.111572 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" event={"ID":"7b10fb00-08de-4e39-ba8e-58a46ec09b19","Type":"ContainerStarted","Data":"50d0a15c864f0b49605b5a325241402409c800b0028979e0ee6fc9971f6dd99e"} Dec 09 12:22:41 crc kubenswrapper[4703]: I1209 12:22:41.156538 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" event={"ID":"7b10fb00-08de-4e39-ba8e-58a46ec09b19","Type":"ContainerStarted","Data":"0f3e0918d1ae89b217185681920f8009c395e579f54c180e37aa40715e93afc0"} Dec 09 12:22:41 crc kubenswrapper[4703]: I1209 12:22:41.157081 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:22:41 crc kubenswrapper[4703]: I1209 12:22:41.194811 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" podStartSLOduration=2.358229564 podStartE2EDuration="7.19478639s" podCreationTimestamp="2025-12-09 12:22:34 +0000 UTC" firstStartedPulling="2025-12-09 12:22:35.591374614 +0000 UTC m=+1054.840138133" lastFinishedPulling="2025-12-09 12:22:40.427931439 +0000 UTC m=+1059.676694959" observedRunningTime="2025-12-09 12:22:41.183555617 +0000 UTC m=+1060.432319156" watchObservedRunningTime="2025-12-09 12:22:41.19478639 +0000 UTC m=+1060.443549909" Dec 09 12:22:45 crc kubenswrapper[4703]: I1209 12:22:45.129524 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-6qblc" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.884411 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.886329 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.887558 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.889020 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.891005 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4khwx" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.891318 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-249kg" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.895783 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.911762 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.913009 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.927483 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-slvpn" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.947960 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.957389 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.958918 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.964597 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lhvz4" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.966527 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.987952 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6"] Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.989055 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.991581 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9bmcc" Dec 09 12:23:17 crc kubenswrapper[4703]: I1209 12:23:17.999363 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.032183 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6glr\" (UniqueName: \"kubernetes.io/projected/fc2c796b-a300-435a-bce4-be428b7b4ac6-kube-api-access-s6glr\") pod \"designate-operator-controller-manager-697fb699cf-cnjkv\" (UID: \"fc2c796b-a300-435a-bce4-be428b7b4ac6\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.032275 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rs89\" (UniqueName: \"kubernetes.io/projected/56aba94b-3065-4e94-a683-ddcb0f0f1734-kube-api-access-2rs89\") pod \"barbican-operator-controller-manager-7d9dfd778-rqtl7\" (UID: \"56aba94b-3065-4e94-a683-ddcb0f0f1734\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.032382 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5sx\" (UniqueName: \"kubernetes.io/projected/874a8c8a-8438-4764-9660-31185bf873e6-kube-api-access-gx5sx\") pod \"cinder-operator-controller-manager-6c677c69b-4njvk\" (UID: \"874a8c8a-8438-4764-9660-31185bf873e6\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.045138 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.046560 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.049392 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x5qfg" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.060554 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.085314 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.097158 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.101348 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.103345 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.112275 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pdnt4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.134154 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6glr\" (UniqueName: \"kubernetes.io/projected/fc2c796b-a300-435a-bce4-be428b7b4ac6-kube-api-access-s6glr\") pod \"designate-operator-controller-manager-697fb699cf-cnjkv\" (UID: \"fc2c796b-a300-435a-bce4-be428b7b4ac6\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.134220 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvm54\" (UniqueName: \"kubernetes.io/projected/8aff308b-1702-4057-80f7-517462396b76-kube-api-access-dvm54\") pod \"glance-operator-controller-manager-5697bb5779-v9r65\" (UID: \"8aff308b-1702-4057-80f7-517462396b76\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.134249 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rs89\" (UniqueName: \"kubernetes.io/projected/56aba94b-3065-4e94-a683-ddcb0f0f1734-kube-api-access-2rs89\") pod \"barbican-operator-controller-manager-7d9dfd778-rqtl7\" (UID: \"56aba94b-3065-4e94-a683-ddcb0f0f1734\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.134361 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jh75\" (UniqueName: \"kubernetes.io/projected/13a53c62-2578-4060-8dbf-17fccd6080b1-kube-api-access-4jh75\") pod \"horizon-operator-controller-manager-68c6d99b8f-srhz6\" (UID: \"13a53c62-2578-4060-8dbf-17fccd6080b1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.134406 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5sx\" (UniqueName: \"kubernetes.io/projected/874a8c8a-8438-4764-9660-31185bf873e6-kube-api-access-gx5sx\") pod \"cinder-operator-controller-manager-6c677c69b-4njvk\" (UID: \"874a8c8a-8438-4764-9660-31185bf873e6\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.162412 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.164365 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.169940 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kn2pp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.172230 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6glr\" (UniqueName: \"kubernetes.io/projected/fc2c796b-a300-435a-bce4-be428b7b4ac6-kube-api-access-s6glr\") pod \"designate-operator-controller-manager-697fb699cf-cnjkv\" (UID: \"fc2c796b-a300-435a-bce4-be428b7b4ac6\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.173316 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5sx\" (UniqueName: \"kubernetes.io/projected/874a8c8a-8438-4764-9660-31185bf873e6-kube-api-access-gx5sx\") pod \"cinder-operator-controller-manager-6c677c69b-4njvk\" (UID: \"874a8c8a-8438-4764-9660-31185bf873e6\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.182695 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rs89\" (UniqueName: \"kubernetes.io/projected/56aba94b-3065-4e94-a683-ddcb0f0f1734-kube-api-access-2rs89\") pod \"barbican-operator-controller-manager-7d9dfd778-rqtl7\" (UID: \"56aba94b-3065-4e94-a683-ddcb0f0f1734\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.182777 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.196380 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.214070 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.215757 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.222657 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.224124 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6d9hd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.230231 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.246942 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf2r9\" (UniqueName: \"kubernetes.io/projected/53f0e694-8a8f-4985-a614-f8ca11f6cf32-kube-api-access-zf2r9\") pod \"ironic-operator-controller-manager-967d97867-4qrc4\" (UID: \"53f0e694-8a8f-4985-a614-f8ca11f6cf32\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247001 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jh75\" (UniqueName: \"kubernetes.io/projected/13a53c62-2578-4060-8dbf-17fccd6080b1-kube-api-access-4jh75\") pod \"horizon-operator-controller-manager-68c6d99b8f-srhz6\" (UID: \"13a53c62-2578-4060-8dbf-17fccd6080b1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247025 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmt8z\" (UniqueName: \"kubernetes.io/projected/73fb1f5d-7761-420a-b327-568cab0fb0d2-kube-api-access-tmt8z\") pod \"keystone-operator-controller-manager-7765d96ddf-bv7zd\" (UID: \"73fb1f5d-7761-420a-b327-568cab0fb0d2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247057 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bgf\" (UniqueName: \"kubernetes.io/projected/063b1e9b-8501-497e-b999-280076922605-kube-api-access-j8bgf\") pod \"heat-operator-controller-manager-5f64f6f8bb-9x27n\" (UID: \"063b1e9b-8501-497e-b999-280076922605\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247051 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247095 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxjws\" (UniqueName: \"kubernetes.io/projected/28c19114-205f-4c58-8ca3-7a7a19b0968b-kube-api-access-rxjws\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247116 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.247170 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvm54\" (UniqueName: \"kubernetes.io/projected/8aff308b-1702-4057-80f7-517462396b76-kube-api-access-dvm54\") pod \"glance-operator-controller-manager-5697bb5779-v9r65\" (UID: \"8aff308b-1702-4057-80f7-517462396b76\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.261288 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.275376 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.279037 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.296809 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jh75\" (UniqueName: \"kubernetes.io/projected/13a53c62-2578-4060-8dbf-17fccd6080b1-kube-api-access-4jh75\") pod \"horizon-operator-controller-manager-68c6d99b8f-srhz6\" (UID: \"13a53c62-2578-4060-8dbf-17fccd6080b1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.297228 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vzkc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.309171 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvm54\" (UniqueName: \"kubernetes.io/projected/8aff308b-1702-4057-80f7-517462396b76-kube-api-access-dvm54\") pod \"glance-operator-controller-manager-5697bb5779-v9r65\" (UID: \"8aff308b-1702-4057-80f7-517462396b76\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.314467 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.323238 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.364915 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxjws\" (UniqueName: \"kubernetes.io/projected/28c19114-205f-4c58-8ca3-7a7a19b0968b-kube-api-access-rxjws\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.364969 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.365110 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf2r9\" (UniqueName: \"kubernetes.io/projected/53f0e694-8a8f-4985-a614-f8ca11f6cf32-kube-api-access-zf2r9\") pod \"ironic-operator-controller-manager-967d97867-4qrc4\" (UID: \"53f0e694-8a8f-4985-a614-f8ca11f6cf32\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.365154 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmt8z\" (UniqueName: \"kubernetes.io/projected/73fb1f5d-7761-420a-b327-568cab0fb0d2-kube-api-access-tmt8z\") pod \"keystone-operator-controller-manager-7765d96ddf-bv7zd\" (UID: \"73fb1f5d-7761-420a-b327-568cab0fb0d2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.365174 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74nb\" (UniqueName: \"kubernetes.io/projected/df921247-4ca5-4916-b42e-15fc060d72c4-kube-api-access-q74nb\") pod \"manila-operator-controller-manager-5b5fd79c9c-8xf75\" (UID: \"df921247-4ca5-4916-b42e-15fc060d72c4\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.365216 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bgf\" (UniqueName: \"kubernetes.io/projected/063b1e9b-8501-497e-b999-280076922605-kube-api-access-j8bgf\") pod \"heat-operator-controller-manager-5f64f6f8bb-9x27n\" (UID: \"063b1e9b-8501-497e-b999-280076922605\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.365724 4703 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.365765 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert podName:28c19114-205f-4c58-8ca3-7a7a19b0968b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:23:18.865750403 +0000 UTC m=+1098.114513922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert") pod "infra-operator-controller-manager-78d48bff9d-2gwjv" (UID: "28c19114-205f-4c58-8ca3-7a7a19b0968b") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.367241 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.369100 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.387817 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-v5nvt" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.405026 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf2r9\" (UniqueName: \"kubernetes.io/projected/53f0e694-8a8f-4985-a614-f8ca11f6cf32-kube-api-access-zf2r9\") pod \"ironic-operator-controller-manager-967d97867-4qrc4\" (UID: \"53f0e694-8a8f-4985-a614-f8ca11f6cf32\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.408789 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.409693 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmt8z\" (UniqueName: \"kubernetes.io/projected/73fb1f5d-7761-420a-b327-568cab0fb0d2-kube-api-access-tmt8z\") pod \"keystone-operator-controller-manager-7765d96ddf-bv7zd\" (UID: \"73fb1f5d-7761-420a-b327-568cab0fb0d2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.412008 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bgf\" (UniqueName: \"kubernetes.io/projected/063b1e9b-8501-497e-b999-280076922605-kube-api-access-j8bgf\") pod \"heat-operator-controller-manager-5f64f6f8bb-9x27n\" (UID: \"063b1e9b-8501-497e-b999-280076922605\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.415223 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxjws\" (UniqueName: \"kubernetes.io/projected/28c19114-205f-4c58-8ca3-7a7a19b0968b-kube-api-access-rxjws\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.431727 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.433388 4703 util.go:30] "No sandbox for pod can be found. 
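[Editor's note: the E1209 pair above (secret.go:188 / nestedpendingoperations.go:348) is the first failed mount of the infra-operator "cert" volume: the secret infra-operator-webhook-server-cert does not exist yet, so the kubelet schedules a retry with durationBeforeRetry 500ms. When the same mount fails again at 12:23:18.903840 further down, the delay doubles to 1s, i.e. the kubelet backs off exponentially per volume operation. A tiny Go sketch of that doubling schedule follows; only the 500ms start and the doubling step are attested by this log, the cap is an assumption for illustration.]

// backoff.go - illustrative sketch of the per-operation retry delay seen in
// the log (500ms, then 1s). The cap is assumed, not taken from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry in the log
	maxDelay := 2 * time.Minute     // assumed upper bound for illustration
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}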
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.441892 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pvhnc" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.444044 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.449545 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.454011 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lgxw2" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.465263 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.467403 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74nb\" (UniqueName: \"kubernetes.io/projected/df921247-4ca5-4916-b42e-15fc060d72c4-kube-api-access-q74nb\") pod \"manila-operator-controller-manager-5b5fd79c9c-8xf75\" (UID: \"df921247-4ca5-4916-b42e-15fc060d72c4\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.493508 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.506512 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-md4r8"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.507677 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.516103 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-md4r8"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.520664 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w876b" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.524541 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74nb\" (UniqueName: \"kubernetes.io/projected/df921247-4ca5-4916-b42e-15fc060d72c4-kube-api-access-q74nb\") pod \"manila-operator-controller-manager-5b5fd79c9c-8xf75\" (UID: \"df921247-4ca5-4916-b42e-15fc060d72c4\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.540457 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.541748 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.544756 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.545030 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cg6jr" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.562748 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.564581 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.570517 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.571282 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2nnh\" (UniqueName: \"kubernetes.io/projected/1facff82-6e9e-4bed-8145-1b00dcc84f51-kube-api-access-l2nnh\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7rqx\" (UID: \"1facff82-6e9e-4bed-8145-1b00dcc84f51\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.571333 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgm6\" (UniqueName: \"kubernetes.io/projected/de1bc545-3573-49b4-9a2d-8db33d6f37d1-kube-api-access-wlgm6\") pod \"mariadb-operator-controller-manager-79c8c4686c-tv5tz\" (UID: \"de1bc545-3573-49b4-9a2d-8db33d6f37d1\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.571400 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvr8p\" (UniqueName: \"kubernetes.io/projected/eb139c31-b7c2-4d15-b9be-c541adf0c87f-kube-api-access-zvr8p\") pod \"nova-operator-controller-manager-697bc559fc-gpbmp\" (UID: \"eb139c31-b7c2-4d15-b9be-c541adf0c87f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.572437 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jkkwc" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.584125 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.588355 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.606782 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-pswqz"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.608601 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.611896 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j64lw" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.618352 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-pswqz"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.624909 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.629735 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.632460 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hxwxt" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.640596 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.642213 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.645330 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gwwfj" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.669076 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.669532 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695628 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695689 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnx2x\" (UniqueName: \"kubernetes.io/projected/2866fa2f-a90b-4137-8ef3-23e9e1140899-kube-api-access-lnx2x\") pod \"octavia-operator-controller-manager-998648c74-md4r8\" (UID: \"2866fa2f-a90b-4137-8ef3-23e9e1140899\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695719 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvmtg\" (UniqueName: \"kubernetes.io/projected/37c73372-42dd-44a5-a5cc-e7d324be6981-kube-api-access-mvmtg\") pod \"placement-operator-controller-manager-78f8948974-pswqz\" (UID: \"37c73372-42dd-44a5-a5cc-e7d324be6981\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695768 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jt7q\" (UniqueName: \"kubernetes.io/projected/acd94eef-86bb-4acc-8790-011d87eb0da4-kube-api-access-4jt7q\") pod \"ovn-operator-controller-manager-b6456fdb6-lrk9g\" (UID: \"acd94eef-86bb-4acc-8790-011d87eb0da4\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695794 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2nnh\" (UniqueName: \"kubernetes.io/projected/1facff82-6e9e-4bed-8145-1b00dcc84f51-kube-api-access-l2nnh\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7rqx\" (UID: \"1facff82-6e9e-4bed-8145-1b00dcc84f51\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695826 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgm6\" (UniqueName: \"kubernetes.io/projected/de1bc545-3573-49b4-9a2d-8db33d6f37d1-kube-api-access-wlgm6\") pod \"mariadb-operator-controller-manager-79c8c4686c-tv5tz\" (UID: \"de1bc545-3573-49b4-9a2d-8db33d6f37d1\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695855 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954w6\" (UniqueName: \"kubernetes.io/projected/a3ff4025-d356-4b31-b42b-5e198155ba91-kube-api-access-954w6\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.695914 4703 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zvr8p\" (UniqueName: \"kubernetes.io/projected/eb139c31-b7c2-4d15-b9be-c541adf0c87f-kube-api-access-zvr8p\") pod \"nova-operator-controller-manager-697bc559fc-gpbmp\" (UID: \"eb139c31-b7c2-4d15-b9be-c541adf0c87f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.699930 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.717591 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.740664 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.746903 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.747518 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2nnh\" (UniqueName: \"kubernetes.io/projected/1facff82-6e9e-4bed-8145-1b00dcc84f51-kube-api-access-l2nnh\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7rqx\" (UID: \"1facff82-6e9e-4bed-8145-1b00dcc84f51\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.764844 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgm6\" (UniqueName: \"kubernetes.io/projected/de1bc545-3573-49b4-9a2d-8db33d6f37d1-kube-api-access-wlgm6\") pod \"mariadb-operator-controller-manager-79c8c4686c-tv5tz\" (UID: \"de1bc545-3573-49b4-9a2d-8db33d6f37d1\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.765456 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvr8p\" (UniqueName: \"kubernetes.io/projected/eb139c31-b7c2-4d15-b9be-c541adf0c87f-kube-api-access-zvr8p\") pod \"nova-operator-controller-manager-697bc559fc-gpbmp\" (UID: \"eb139c31-b7c2-4d15-b9be-c541adf0c87f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.773355 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.775391 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.790888 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sdj42" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.798819 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.800853 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954w6\" (UniqueName: \"kubernetes.io/projected/a3ff4025-d356-4b31-b42b-5e198155ba91-kube-api-access-954w6\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.800981 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblbz\" (UniqueName: \"kubernetes.io/projected/79e382ca-8d65-45fc-8dbf-3626827cb50f-kube-api-access-bblbz\") pod \"test-operator-controller-manager-5854674fcc-kvpf4\" (UID: \"79e382ca-8d65-45fc-8dbf-3626827cb50f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801006 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801036 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnx2x\" (UniqueName: \"kubernetes.io/projected/2866fa2f-a90b-4137-8ef3-23e9e1140899-kube-api-access-lnx2x\") pod \"octavia-operator-controller-manager-998648c74-md4r8\" (UID: \"2866fa2f-a90b-4137-8ef3-23e9e1140899\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801063 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvmtg\" (UniqueName: \"kubernetes.io/projected/37c73372-42dd-44a5-a5cc-e7d324be6981-kube-api-access-mvmtg\") pod \"placement-operator-controller-manager-78f8948974-pswqz\" (UID: \"37c73372-42dd-44a5-a5cc-e7d324be6981\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9th\" (UniqueName: \"kubernetes.io/projected/4868f7dd-ada4-4df8-9bc8-ae5ca73f2935-kube-api-access-vb9th\") pod \"swift-operator-controller-manager-9d58d64bc-zcck5\" (UID: \"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801153 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdp7f\" (UniqueName: \"kubernetes.io/projected/db6f122b-a853-4ecb-8d82-2a8a04c8224e-kube-api-access-pdp7f\") pod \"telemetry-operator-controller-manager-797ff5dd46-77fms\" (UID: \"db6f122b-a853-4ecb-8d82-2a8a04c8224e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.801179 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4jt7q\" (UniqueName: \"kubernetes.io/projected/acd94eef-86bb-4acc-8790-011d87eb0da4-kube-api-access-4jt7q\") pod \"ovn-operator-controller-manager-b6456fdb6-lrk9g\" (UID: \"acd94eef-86bb-4acc-8790-011d87eb0da4\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.801754 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.801801 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:19.301788132 +0000 UTC m=+1098.550551641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.805561 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.810874 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.852282 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvmtg\" (UniqueName: \"kubernetes.io/projected/37c73372-42dd-44a5-a5cc-e7d324be6981-kube-api-access-mvmtg\") pod \"placement-operator-controller-manager-78f8948974-pswqz\" (UID: \"37c73372-42dd-44a5-a5cc-e7d324be6981\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.852897 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jt7q\" (UniqueName: \"kubernetes.io/projected/acd94eef-86bb-4acc-8790-011d87eb0da4-kube-api-access-4jt7q\") pod \"ovn-operator-controller-manager-b6456fdb6-lrk9g\" (UID: \"acd94eef-86bb-4acc-8790-011d87eb0da4\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.854640 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954w6\" (UniqueName: \"kubernetes.io/projected/a3ff4025-d356-4b31-b42b-5e198155ba91-kube-api-access-954w6\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.859798 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.864409 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.868001 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnx2x\" (UniqueName: \"kubernetes.io/projected/2866fa2f-a90b-4137-8ef3-23e9e1140899-kube-api-access-lnx2x\") pod \"octavia-operator-controller-manager-998648c74-md4r8\" (UID: \"2866fa2f-a90b-4137-8ef3-23e9e1140899\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.875464 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.883724 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f62ch" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.889567 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.902804 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.902882 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblbz\" (UniqueName: \"kubernetes.io/projected/79e382ca-8d65-45fc-8dbf-3626827cb50f-kube-api-access-bblbz\") pod \"test-operator-controller-manager-5854674fcc-kvpf4\" (UID: \"79e382ca-8d65-45fc-8dbf-3626827cb50f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.902982 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9th\" (UniqueName: \"kubernetes.io/projected/4868f7dd-ada4-4df8-9bc8-ae5ca73f2935-kube-api-access-vb9th\") pod \"swift-operator-controller-manager-9d58d64bc-zcck5\" (UID: \"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.903004 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdp7f\" (UniqueName: \"kubernetes.io/projected/db6f122b-a853-4ecb-8d82-2a8a04c8224e-kube-api-access-pdp7f\") pod \"telemetry-operator-controller-manager-797ff5dd46-77fms\" (UID: \"db6f122b-a853-4ecb-8d82-2a8a04c8224e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.903773 4703 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: E1209 12:23:18.903840 4703 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert podName:28c19114-205f-4c58-8ca3-7a7a19b0968b nodeName:}" failed. No retries permitted until 2025-12-09 12:23:19.903814719 +0000 UTC m=+1099.152578238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert") pod "infra-operator-controller-manager-78d48bff9d-2gwjv" (UID: "28c19114-205f-4c58-8ca3-7a7a19b0968b") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.925518 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-s992d"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.927036 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.929889 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.929916 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.930098 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-6m842" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.932794 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblbz\" (UniqueName: \"kubernetes.io/projected/79e382ca-8d65-45fc-8dbf-3626827cb50f-kube-api-access-bblbz\") pod \"test-operator-controller-manager-5854674fcc-kvpf4\" (UID: \"79e382ca-8d65-45fc-8dbf-3626827cb50f\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.934001 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9th\" (UniqueName: \"kubernetes.io/projected/4868f7dd-ada4-4df8-9bc8-ae5ca73f2935-kube-api-access-vb9th\") pod \"swift-operator-controller-manager-9d58d64bc-zcck5\" (UID: \"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.942009 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-s992d"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.950024 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdp7f\" (UniqueName: \"kubernetes.io/projected/db6f122b-a853-4ecb-8d82-2a8a04c8224e-kube-api-access-pdp7f\") pod \"telemetry-operator-controller-manager-797ff5dd46-77fms\" (UID: \"db6f122b-a853-4ecb-8d82-2a8a04c8224e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.954906 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.956627 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.963433 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.965582 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mts68" Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.965850 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9"] Dec 09 12:23:18 crc kubenswrapper[4703]: I1209 12:23:18.974165 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.004888 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt87\" (UniqueName: \"kubernetes.io/projected/d34eb12d-d10a-406c-bfd9-9f772f9e63eb-kube-api-access-zjt87\") pod \"watcher-operator-controller-manager-667bd8d554-959xw\" (UID: \"d34eb12d-d10a-406c-bfd9-9f772f9e63eb\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.028437 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.082788 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.106479 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7m85\" (UniqueName: \"kubernetes.io/projected/ba302959-e371-4d55-a320-062f7aeeefea-kube-api-access-v7m85\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.106584 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt87\" (UniqueName: \"kubernetes.io/projected/d34eb12d-d10a-406c-bfd9-9f772f9e63eb-kube-api-access-zjt87\") pod \"watcher-operator-controller-manager-667bd8d554-959xw\" (UID: \"d34eb12d-d10a-406c-bfd9-9f772f9e63eb\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.106678 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.106739 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bjr\" (UniqueName: \"kubernetes.io/projected/b37cc5b3-46d5-403e-be1b-46eebc75f0ef-kube-api-access-z2bjr\") 
pod \"rabbitmq-cluster-operator-manager-668c99d594-842l9\" (UID: \"b37cc5b3-46d5-403e-be1b-46eebc75f0ef\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.106827 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.117183 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.143449 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt87\" (UniqueName: \"kubernetes.io/projected/d34eb12d-d10a-406c-bfd9-9f772f9e63eb-kube-api-access-zjt87\") pod \"watcher-operator-controller-manager-667bd8d554-959xw\" (UID: \"d34eb12d-d10a-406c-bfd9-9f772f9e63eb\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.190737 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.203541 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6"] Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.208007 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7m85\" (UniqueName: \"kubernetes.io/projected/ba302959-e371-4d55-a320-062f7aeeefea-kube-api-access-v7m85\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.215237 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.215339 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:19.715312191 +0000 UTC m=+1098.964075710 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.215098 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.218694 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bjr\" (UniqueName: \"kubernetes.io/projected/b37cc5b3-46d5-403e-be1b-46eebc75f0ef-kube-api-access-z2bjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-842l9\" (UID: \"b37cc5b3-46d5-403e-be1b-46eebc75f0ef\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.219306 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.219735 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.220370 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:19.720333597 +0000 UTC m=+1098.969097116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.236399 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7m85\" (UniqueName: \"kubernetes.io/projected/ba302959-e371-4d55-a320-062f7aeeefea-kube-api-access-v7m85\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.257168 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bjr\" (UniqueName: \"kubernetes.io/projected/b37cc5b3-46d5-403e-be1b-46eebc75f0ef-kube-api-access-z2bjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-842l9\" (UID: \"b37cc5b3-46d5-403e-be1b-46eebc75f0ef\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.257277 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv"] Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.268087 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7"] Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.308770 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.322010 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.322325 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.322385 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:20.322366755 +0000 UTC m=+1099.571130274 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: W1209 12:23:19.328879 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc2c796b_a300_435a_bce4_be428b7b4ac6.slice/crio-04bb3030aec7a91b191a25c335f673ad1ef165306b4e4bb1925f4bb9e649c4e9 WatchSource:0}: Error finding container 04bb3030aec7a91b191a25c335f673ad1ef165306b4e4bb1925f4bb9e649c4e9: Status 404 returned error can't find the container with id 04bb3030aec7a91b191a25c335f673ad1ef165306b4e4bb1925f4bb9e649c4e9 Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.350771 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.519199 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk"] Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.580951 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" event={"ID":"56aba94b-3065-4e94-a683-ddcb0f0f1734","Type":"ContainerStarted","Data":"187385f969e0aa3db7e8b8db1dbb057f0ae09906f03efbf7598d068638374336"} Dec 09 12:23:19 crc kubenswrapper[4703]: W1209 12:23:19.601198 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod874a8c8a_8438_4764_9660_31185bf873e6.slice/crio-f0372d44171927cb9965e710f6326f4b850fd666070813aa2b6af484f83fc345 WatchSource:0}: Error finding container f0372d44171927cb9965e710f6326f4b850fd666070813aa2b6af484f83fc345: Status 404 returned error can't find the container with id f0372d44171927cb9965e710f6326f4b850fd666070813aa2b6af484f83fc345 Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.601836 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" event={"ID":"fc2c796b-a300-435a-bce4-be428b7b4ac6","Type":"ContainerStarted","Data":"04bb3030aec7a91b191a25c335f673ad1ef165306b4e4bb1925f4bb9e649c4e9"} Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.612970 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" event={"ID":"13a53c62-2578-4060-8dbf-17fccd6080b1","Type":"ContainerStarted","Data":"13b8f6f4d532a6a38fa8ca1c4e04de0710aa7bb229e392c09eda468911dafff5"} Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.869374 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.869505 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod 
\"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.870230 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.870309 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:20.870290109 +0000 UTC m=+1100.119053618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.886495 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.886595 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:20.886569498 +0000 UTC m=+1100.135333017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: I1209 12:23:19.970740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.970995 4703 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:19 crc kubenswrapper[4703]: E1209 12:23:19.971050 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert podName:28c19114-205f-4c58-8ca3-7a7a19b0968b nodeName:}" failed. No retries permitted until 2025-12-09 12:23:21.971034622 +0000 UTC m=+1101.219798141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert") pod "infra-operator-controller-manager-78d48bff9d-2gwjv" (UID: "28c19114-205f-4c58-8ca3-7a7a19b0968b") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.027724 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.056790 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4"] Dec 09 12:23:20 crc kubenswrapper[4703]: W1209 12:23:20.059515 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53f0e694_8a8f_4985_a614_f8ca11f6cf32.slice/crio-42cddea8526d6620ee8e3edee2b48dc871f092d09f215bb78e248cd52d8bdf0f WatchSource:0}: Error finding container 42cddea8526d6620ee8e3edee2b48dc871f092d09f215bb78e248cd52d8bdf0f: Status 404 returned error can't find the container with id 42cddea8526d6620ee8e3edee2b48dc871f092d09f215bb78e248cd52d8bdf0f Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.090981 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd"] Dec 09 12:23:20 crc kubenswrapper[4703]: W1209 12:23:20.096073 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73fb1f5d_7761_420a_b327_568cab0fb0d2.slice/crio-c14e851140d3e35d98d5c746a858773cb8d793814c663bdac8169129642d608f WatchSource:0}: Error finding container c14e851140d3e35d98d5c746a858773cb8d793814c663bdac8169129642d608f: Status 404 returned error can't find the container with id c14e851140d3e35d98d5c746a858773cb8d793814c663bdac8169129642d608f Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.139315 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.386136 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.386468 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.386564 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:22.386543145 +0000 UTC m=+1101.635306664 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.424534 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.439863 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx"] Dec 09 12:23:20 crc kubenswrapper[4703]: W1209 12:23:20.442660 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1facff82_6e9e_4bed_8145_1b00dcc84f51.slice/crio-81745f569c18f01f87e16d16620d73d0a4ae07c4c44116a266c2d0e5fffbe74c WatchSource:0}: Error finding container 81745f569c18f01f87e16d16620d73d0a4ae07c4c44116a266c2d0e5fffbe74c: Status 404 returned error can't find the container with id 81745f569c18f01f87e16d16620d73d0a4ae07c4c44116a266c2d0e5fffbe74c Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.447425 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.454708 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-pswqz"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.478877 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.496030 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.558356 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.610148 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.658445 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.667500 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" event={"ID":"874a8c8a-8438-4764-9660-31185bf873e6","Type":"ContainerStarted","Data":"f0372d44171927cb9965e710f6326f4b850fd666070813aa2b6af484f83fc345"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.690778 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-md4r8"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.708328 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9"] Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.755932 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjt87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-959xw_openstack-operators(d34eb12d-d10a-406c-bfd9-9f772f9e63eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.761287 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjt87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-959xw_openstack-operators(d34eb12d-d10a-406c-bfd9-9f772f9e63eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.763329 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" podUID="d34eb12d-d10a-406c-bfd9-9f772f9e63eb" Dec 09 12:23:20 crc kubenswrapper[4703]: W1209 12:23:20.763735 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb6f122b_a853_4ecb_8d82_2a8a04c8224e.slice/crio-9f583b540c98c99e5cdc7399a7823afb4077a734388b5864a2da99b2e53ba609 WatchSource:0}: Error finding container 9f583b540c98c99e5cdc7399a7823afb4077a734388b5864a2da99b2e53ba609: Status 404 returned error can't find the container with id 9f583b540c98c99e5cdc7399a7823afb4077a734388b5864a2da99b2e53ba609 Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.763765 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2bjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-842l9_openstack-operators(b37cc5b3-46d5-403e-be1b-46eebc75f0ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.769296 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podUID="b37cc5b3-46d5-403e-be1b-46eebc75f0ef" Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.769628 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms"] Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.769689 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" event={"ID":"df921247-4ca5-4916-b42e-15fc060d72c4","Type":"ContainerStarted","Data":"2b16f8724ba7a7d479bb36be110bae7d422b301f1834f08ad5dbf7f924262129"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.772538 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" event={"ID":"73fb1f5d-7761-420a-b327-568cab0fb0d2","Type":"ContainerStarted","Data":"c14e851140d3e35d98d5c746a858773cb8d793814c663bdac8169129642d608f"} Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.783527 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdp7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-797ff5dd46-77fms_openstack-operators(db6f122b-a853-4ecb-8d82-2a8a04c8224e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.804262 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdp7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-797ff5dd46-77fms_openstack-operators(db6f122b-a853-4ecb-8d82-2a8a04c8224e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.806893 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" event={"ID":"53f0e694-8a8f-4985-a614-f8ca11f6cf32","Type":"ContainerStarted","Data":"42cddea8526d6620ee8e3edee2b48dc871f092d09f215bb78e248cd52d8bdf0f"} Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.809467 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podUID="db6f122b-a853-4ecb-8d82-2a8a04c8224e" Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.842444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" event={"ID":"8aff308b-1702-4057-80f7-517462396b76","Type":"ContainerStarted","Data":"1ebb39b57dba2f17d76698877d0ed668f4facfeaf376b2e77638962caaec6793"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.857752 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" event={"ID":"37c73372-42dd-44a5-a5cc-e7d324be6981","Type":"ContainerStarted","Data":"cf028a589cb37230bfc44b0a415130e492b2ab271577f123750b5464e84a6b02"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.862259 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" event={"ID":"063b1e9b-8501-497e-b999-280076922605","Type":"ContainerStarted","Data":"964fbba55cc7dd84b0281f8aa8314814cb8287267320098e3bc68fbb65e92890"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.864261 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" event={"ID":"acd94eef-86bb-4acc-8790-011d87eb0da4","Type":"ContainerStarted","Data":"038751558d03e37ef51677993d7500ea28870fac2da02e7b0a3701a36532a08b"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.865294 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" event={"ID":"1facff82-6e9e-4bed-8145-1b00dcc84f51","Type":"ContainerStarted","Data":"81745f569c18f01f87e16d16620d73d0a4ae07c4c44116a266c2d0e5fffbe74c"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.880549 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" event={"ID":"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935","Type":"ContainerStarted","Data":"2eee2d17780dd3d084e7ad4ae249ce0ed5d9e81c1f33683a734787bfa9a75f3d"} Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.905425 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:20 crc kubenswrapper[4703]: I1209 12:23:20.905546 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.905766 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.909384 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret 
"webhook-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.912597 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:22.905822843 +0000 UTC m=+1102.154586372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:20 crc kubenswrapper[4703]: E1209 12:23:20.912707 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:22.912655118 +0000 UTC m=+1102.161418637 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.925156 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" event={"ID":"79e382ca-8d65-45fc-8dbf-3626827cb50f","Type":"ContainerStarted","Data":"110666cda62b7ac673cc69755d24cdc16ddee04f138c7a87a2295fe86283359e"} Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.933791 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" event={"ID":"b37cc5b3-46d5-403e-be1b-46eebc75f0ef","Type":"ContainerStarted","Data":"1b615f185c5d8c0c98a7364319fcf22aeca283a33000cbfed40a0a783d89fe06"} Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.938041 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" event={"ID":"db6f122b-a853-4ecb-8d82-2a8a04c8224e","Type":"ContainerStarted","Data":"9f583b540c98c99e5cdc7399a7823afb4077a734388b5864a2da99b2e53ba609"} Dec 09 12:23:21 crc kubenswrapper[4703]: E1209 12:23:21.939967 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podUID="b37cc5b3-46d5-403e-be1b-46eebc75f0ef" Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.941079 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" event={"ID":"2866fa2f-a90b-4137-8ef3-23e9e1140899","Type":"ContainerStarted","Data":"d1c4851ca986a2387e8e40e4ac4a538126eac7b336716114e846e006a2d2c238"} Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.942237 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" 
event={"ID":"d34eb12d-d10a-406c-bfd9-9f772f9e63eb","Type":"ContainerStarted","Data":"46922b1f5d05ed7539b309f297af12a090ccd396c4d7d0d6bb8fdffc60a1edfe"} Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.943924 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" event={"ID":"eb139c31-b7c2-4d15-b9be-c541adf0c87f","Type":"ContainerStarted","Data":"dbdec37163945f924f6d8adcc0d473fa8b63d101fe4b1b8736f8f583e2f2dd15"} Dec 09 12:23:21 crc kubenswrapper[4703]: E1209 12:23:21.944631 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" podUID="d34eb12d-d10a-406c-bfd9-9f772f9e63eb" Dec 09 12:23:21 crc kubenswrapper[4703]: E1209 12:23:21.946066 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podUID="db6f122b-a853-4ecb-8d82-2a8a04c8224e" Dec 09 12:23:21 crc kubenswrapper[4703]: I1209 12:23:21.947925 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" event={"ID":"de1bc545-3573-49b4-9a2d-8db33d6f37d1","Type":"ContainerStarted","Data":"80336ed5d5838c69b5df4f35d8403145d75ae365ebc927eabdce1ba23632a75c"} Dec 09 12:23:22 crc kubenswrapper[4703]: I1209 12:23:22.042790 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.043412 4703 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.043505 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert podName:28c19114-205f-4c58-8ca3-7a7a19b0968b nodeName:}" failed. No retries permitted until 2025-12-09 12:23:26.04347554 +0000 UTC m=+1105.292239059 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert") pod "infra-operator-controller-manager-78d48bff9d-2gwjv" (UID: "28c19114-205f-4c58-8ca3-7a7a19b0968b") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: I1209 12:23:22.389956 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.390378 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.390478 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:26.39042092 +0000 UTC m=+1105.639184439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: I1209 12:23:22.985324 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:22 crc kubenswrapper[4703]: I1209 12:23:22.985437 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.985580 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.985628 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:26.985613471 +0000 UTC m=+1106.234376990 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.985983 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:23:22 crc kubenswrapper[4703]: E1209 12:23:22.986016 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:26.986008081 +0000 UTC m=+1106.234771600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:23 crc kubenswrapper[4703]: E1209 12:23:23.006473 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podUID="b37cc5b3-46d5-403e-be1b-46eebc75f0ef" Dec 09 12:23:23 crc kubenswrapper[4703]: E1209 12:23:23.007048 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podUID="db6f122b-a853-4ecb-8d82-2a8a04c8224e" Dec 09 12:23:23 crc kubenswrapper[4703]: E1209 12:23:23.007119 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" podUID="d34eb12d-d10a-406c-bfd9-9f772f9e63eb" Dec 09 12:23:26 crc kubenswrapper[4703]: I1209 12:23:26.116276 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:26 crc kubenswrapper[4703]: E1209 12:23:26.116375 4703 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:26 crc 
kubenswrapper[4703]: E1209 12:23:26.117109 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert podName:28c19114-205f-4c58-8ca3-7a7a19b0968b nodeName:}" failed. No retries permitted until 2025-12-09 12:23:34.117081489 +0000 UTC m=+1113.365845008 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert") pod "infra-operator-controller-manager-78d48bff9d-2gwjv" (UID: "28c19114-205f-4c58-8ca3-7a7a19b0968b") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:23:26 crc kubenswrapper[4703]: I1209 12:23:26.439271 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:26 crc kubenswrapper[4703]: E1209 12:23:26.439517 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:26 crc kubenswrapper[4703]: E1209 12:23:26.439603 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:34.439563877 +0000 UTC m=+1113.688327396 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:27 crc kubenswrapper[4703]: I1209 12:23:27.155167 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:27 crc kubenswrapper[4703]: I1209 12:23:27.155288 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:27 crc kubenswrapper[4703]: E1209 12:23:27.155449 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:23:27 crc kubenswrapper[4703]: E1209 12:23:27.155498 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:35.155484511 +0000 UTC m=+1114.404248030 (durationBeforeRetry 8s). 
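[Annotation] These cert mounts keep failing only until the Secrets are published; immediately below, the infra-operator cert mount flips to "MountVolume.SetUp succeeded" at 12:23:34 once infra-operator-webhook-server-cert appears, and the kubelet starts the pod sandbox. If you need to block on that from outside (scripting a deployment check, say), a small poll loop does it — a hedged sketch, with the namespace and Secret name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSecret polls until the named Secret exists or the context expires.
    func waitForSecret(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            _, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                return nil
            }
            if !apierrors.IsNotFound(err) {
                return err // a real API error, not just "not created yet"
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()
        if err := waitForSecret(ctx, cs, "openstack-operators", "infra-operator-webhook-server-cert"); err != nil {
            panic(err)
        }
        fmt.Println("secret present; the kubelet's next mount retry should succeed")
    }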
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:27 crc kubenswrapper[4703]: E1209 12:23:27.155605 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:27 crc kubenswrapper[4703]: E1209 12:23:27.155751 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:35.155717597 +0000 UTC m=+1114.404481286 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:30 crc kubenswrapper[4703]: I1209 12:23:30.084581 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:23:30 crc kubenswrapper[4703]: I1209 12:23:30.084892 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:23:34 crc kubenswrapper[4703]: I1209 12:23:34.121668 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:34 crc kubenswrapper[4703]: I1209 12:23:34.128664 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28c19114-205f-4c58-8ca3-7a7a19b0968b-cert\") pod \"infra-operator-controller-manager-78d48bff9d-2gwjv\" (UID: \"28c19114-205f-4c58-8ca3-7a7a19b0968b\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:34 crc kubenswrapper[4703]: I1209 12:23:34.355317 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" Dec 09 12:23:34 crc kubenswrapper[4703]: I1209 12:23:34.530218 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:34 crc kubenswrapper[4703]: E1209 12:23:34.530462 4703 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:34 crc kubenswrapper[4703]: E1209 12:23:34.530574 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert podName:a3ff4025-d356-4b31-b42b-5e198155ba91 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:50.530549097 +0000 UTC m=+1129.779312616 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fgcmht" (UID: "a3ff4025-d356-4b31-b42b-5e198155ba91") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:23:35 crc kubenswrapper[4703]: I1209 12:23:35.241166 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:35 crc kubenswrapper[4703]: E1209 12:23:35.241446 4703 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:23:35 crc kubenswrapper[4703]: E1209 12:23:35.242057 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:51.242029329 +0000 UTC m=+1130.490792958 (durationBeforeRetry 16s). 
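[Annotation] The machine-config-daemon liveness failure above is a plain "connection refused" on 127.0.0.1:8798, i.e. nothing was listening on the health port at the moment the kubelet probed it. For reference, the contract such an HTTP probe expects is tiny: any 2xx response on the configured path. A minimal Go endpoint of that shape, with the port and path copied from the probe output — an illustrative sketch, not the daemon's real implementation:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Minimal health endpoint matching the probe seen in the log:
        // GET http://127.0.0.1:8798/health must answer 2xx for the liveness probe to pass.
        http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            fmt.Fprintln(w, "ok")
        })
        // "connection refused" in the probe output simply means no listener on this address yet.
        if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
            panic(err)
        }
    }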
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "webhook-server-cert" not found Dec 09 12:23:35 crc kubenswrapper[4703]: I1209 12:23:35.242037 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:35 crc kubenswrapper[4703]: E1209 12:23:35.242256 4703 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:23:35 crc kubenswrapper[4703]: E1209 12:23:35.242311 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs podName:ba302959-e371-4d55-a320-062f7aeeefea nodeName:}" failed. No retries permitted until 2025-12-09 12:23:51.242301157 +0000 UTC m=+1130.491064676 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-s992d" (UID: "ba302959-e371-4d55-a320-062f7aeeefea") : secret "metrics-server-cert" not found Dec 09 12:23:38 crc kubenswrapper[4703]: E1209 12:23:38.997172 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 09 12:23:38 crc kubenswrapper[4703]: E1209 12:23:38.998044 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlgm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-tv5tz_openstack-operators(de1bc545-3573-49b4-9a2d-8db33d6f37d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:40 crc kubenswrapper[4703]: E1209 12:23:40.146960 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 09 12:23:40 crc kubenswrapper[4703]: E1209 12:23:40.147149 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvmtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-pswqz_openstack-operators(37c73372-42dd-44a5-a5cc-e7d324be6981): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:41 crc kubenswrapper[4703]: E1209 12:23:41.531524 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 09 12:23:41 crc kubenswrapper[4703]: E1209 12:23:41.532142 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q74nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-8xf75_openstack-operators(df921247-4ca5-4916-b42e-15fc060d72c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:42 crc kubenswrapper[4703]: E1209 12:23:42.462698 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 09 12:23:42 crc kubenswrapper[4703]: E1209 12:23:42.463427 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvm54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-v9r65_openstack-operators(8aff308b-1702-4057-80f7-517462396b76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:43 crc kubenswrapper[4703]: E1209 12:23:43.517976 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 09 12:23:43 crc kubenswrapper[4703]: E1209 12:23:43.518154 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2nnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-k7rqx_openstack-operators(1facff82-6e9e-4bed-8145-1b00dcc84f51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:47 crc kubenswrapper[4703]: E1209 12:23:47.211249 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 09 12:23:47 crc kubenswrapper[4703]: E1209 12:23:47.212540 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jh75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-srhz6_openstack-operators(13a53c62-2578-4060-8dbf-17fccd6080b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:48 crc kubenswrapper[4703]: E1209 12:23:48.030519 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 09 12:23:48 crc kubenswrapper[4703]: E1209 12:23:48.030814 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-zcck5_openstack-operators(4868f7dd-ada4-4df8-9bc8-ae5ca73f2935): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:48 crc kubenswrapper[4703]: E1209 12:23:48.713458 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 09 12:23:48 crc kubenswrapper[4703]: E1209 12:23:48.713702 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bblbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-kvpf4_openstack-operators(79e382ca-8d65-45fc-8dbf-3626827cb50f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:49 crc kubenswrapper[4703]: E1209 12:23:49.805654 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 09 12:23:49 crc kubenswrapper[4703]: E1209 12:23:49.805862 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8bgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-9x27n_openstack-operators(063b1e9b-8501-497e-b999-280076922605): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:50 crc kubenswrapper[4703]: I1209 12:23:50.617464 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:50 crc kubenswrapper[4703]: I1209 12:23:50.624549 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a3ff4025-d356-4b31-b42b-5e198155ba91-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fgcmht\" (UID: \"a3ff4025-d356-4b31-b42b-5e198155ba91\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:50 crc kubenswrapper[4703]: I1209 12:23:50.892295 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" Dec 09 12:23:51 crc kubenswrapper[4703]: E1209 12:23:51.265242 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 09 12:23:51 crc kubenswrapper[4703]: E1209 12:23:51.265612 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rs89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-rqtl7_openstack-operators(56aba94b-3065-4e94-a683-ddcb0f0f1734): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:51 crc kubenswrapper[4703]: I1209 12:23:51.329659 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:51 crc kubenswrapper[4703]: I1209 12:23:51.329778 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:51 crc kubenswrapper[4703]: I1209 12:23:51.335397 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:51 crc kubenswrapper[4703]: I1209 12:23:51.335584 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba302959-e371-4d55-a320-062f7aeeefea-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-s992d\" (UID: \"ba302959-e371-4d55-a320-062f7aeeefea\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:51 crc kubenswrapper[4703]: I1209 12:23:51.434534 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:23:52 crc kubenswrapper[4703]: E1209 12:23:52.055348 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 09 12:23:52 crc kubenswrapper[4703]: E1209 12:23:52.055930 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gx5sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-4njvk_openstack-operators(874a8c8a-8438-4764-9660-31185bf873e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:52 crc kubenswrapper[4703]: E1209 12:23:52.668664 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 09 12:23:52 crc kubenswrapper[4703]: E1209 12:23:52.668861 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnx2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-md4r8_openstack-operators(2866fa2f-a90b-4137-8ef3-23e9e1140899): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:53 crc kubenswrapper[4703]: E1209 12:23:53.406903 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8" Dec 09 12:23:53 crc kubenswrapper[4703]: E1209 12:23:53.407427 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjt87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-959xw_openstack-operators(d34eb12d-d10a-406c-bfd9-9f772f9e63eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:55 crc kubenswrapper[4703]: E1209 12:23:55.330858 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 09 12:23:55 crc kubenswrapper[4703]: E1209 12:23:55.331114 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvr8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-gpbmp_openstack-operators(eb139c31-b7c2-4d15-b9be-c541adf0c87f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.014145 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.014695 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmt8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-bv7zd_openstack-operators(73fb1f5d-7761-420a-b327-568cab0fb0d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:57 crc kubenswrapper[4703]: I1209 12:23:57.359105 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv"] Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.836964 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.837262 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2bjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod rabbitmq-cluster-operator-manager-668c99d594-842l9_openstack-operators(b37cc5b3-46d5-403e-be1b-46eebc75f0ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.838459 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podUID="b37cc5b3-46d5-403e-be1b-46eebc75f0ef" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.906753 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.906841 4703 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f" Dec 09 12:23:57 crc kubenswrapper[4703]: E1209 12:23:57.907034 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdp7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod telemetry-operator-controller-manager-797ff5dd46-77fms_openstack-operators(db6f122b-a853-4ecb-8d82-2a8a04c8224e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:23:58 crc kubenswrapper[4703]: I1209 12:23:58.670632 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" event={"ID":"53f0e694-8a8f-4985-a614-f8ca11f6cf32","Type":"ContainerStarted","Data":"5511aecd1e7431bf7f0a2d8701bd60403716741ad86e4b67bcc82b6bcc8c1788"} Dec 09 12:23:58 crc kubenswrapper[4703]: I1209 12:23:58.672002 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" event={"ID":"28c19114-205f-4c58-8ca3-7a7a19b0968b","Type":"ContainerStarted","Data":"2f5389dfecc4dbbe9d31fe7dff408b2d093a2cf66f043053a4cc9416c4109833"} Dec 09 12:23:58 crc kubenswrapper[4703]: I1209 12:23:58.840083 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht"] Dec 09 12:23:58 crc kubenswrapper[4703]: W1209 12:23:58.908257 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ff4025_d356_4b31_b42b_5e198155ba91.slice/crio-ea5b61715f78248e264272f631a35d1c15f2f0852598731c1c27f8733e44aebd WatchSource:0}: Error finding container ea5b61715f78248e264272f631a35d1c15f2f0852598731c1c27f8733e44aebd: Status 404 returned error can't find the container with id ea5b61715f78248e264272f631a35d1c15f2f0852598731c1c27f8733e44aebd Dec 09 12:23:58 crc kubenswrapper[4703]: I1209 12:23:58.911148 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-s992d"] Dec 09 12:23:59 crc kubenswrapper[4703]: W1209 12:23:59.009915 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba302959_e371_4d55_a320_062f7aeeefea.slice/crio-a9f31612193249f3938044712200499c63f04f20d6c53481c0ef8a6d0b7c231c WatchSource:0}: Error finding container a9f31612193249f3938044712200499c63f04f20d6c53481c0ef8a6d0b7c231c: Status 404 returned error can't find the container with id a9f31612193249f3938044712200499c63f04f20d6c53481c0ef8a6d0b7c231c Dec 09 12:23:59 crc kubenswrapper[4703]: I1209 12:23:59.683796 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" event={"ID":"fc2c796b-a300-435a-bce4-be428b7b4ac6","Type":"ContainerStarted","Data":"a1ef11661b53fbe303ed302eadb6a19926884696e4af1070de333d05064da762"} Dec 09 12:23:59 crc kubenswrapper[4703]: I1209 12:23:59.685592 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" event={"ID":"ba302959-e371-4d55-a320-062f7aeeefea","Type":"ContainerStarted","Data":"a9f31612193249f3938044712200499c63f04f20d6c53481c0ef8a6d0b7c231c"} Dec 09 12:23:59 crc kubenswrapper[4703]: I1209 12:23:59.687092 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" event={"ID":"a3ff4025-d356-4b31-b42b-5e198155ba91","Type":"ContainerStarted","Data":"ea5b61715f78248e264272f631a35d1c15f2f0852598731c1c27f8733e44aebd"} Dec 09 12:23:59 crc kubenswrapper[4703]: I1209 12:23:59.688566 4703 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" event={"ID":"acd94eef-86bb-4acc-8790-011d87eb0da4","Type":"ContainerStarted","Data":"3297ba167f7386d97a7fc5ff37b75afdb42daa4dc9e689d506cce2101b60341c"} Dec 09 12:24:00 crc kubenswrapper[4703]: I1209 12:24:00.083724 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:24:00 crc kubenswrapper[4703]: I1209 12:24:00.083819 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:24:00 crc kubenswrapper[4703]: I1209 12:24:00.715760 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" event={"ID":"ba302959-e371-4d55-a320-062f7aeeefea","Type":"ContainerStarted","Data":"9b2ce3351fef9fb1634268363d236485dcd663df58e23af77a54aa7e1e04f9f1"} Dec 09 12:24:00 crc kubenswrapper[4703]: I1209 12:24:00.717047 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:24:00 crc kubenswrapper[4703]: I1209 12:24:00.744730 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" podStartSLOduration=42.744715538 podStartE2EDuration="42.744715538s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:24:00.740049364 +0000 UTC m=+1139.988812883" watchObservedRunningTime="2025-12-09 12:24:00.744715538 +0000 UTC m=+1139.993479057" Dec 09 12:24:11 crc kubenswrapper[4703]: E1209 12:24:11.074579 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podUID="b37cc5b3-46d5-403e-be1b-46eebc75f0ef" Dec 09 12:24:11 crc kubenswrapper[4703]: I1209 12:24:11.441342 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-s992d" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.276585 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.277005 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlgm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-tv5tz_openstack-operators(de1bc545-3573-49b4-9a2d-8db33d6f37d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.278506 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" podUID="de1bc545-3573-49b4-9a2d-8db33d6f37d1" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.290014 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.290183 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rs89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-rqtl7_openstack-operators(56aba94b-3065-4e94-a683-ddcb0f0f1734): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.291460 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" podUID="56aba94b-3065-4e94-a683-ddcb0f0f1734" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.296538 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.296704 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q74nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-8xf75_openstack-operators(df921247-4ca5-4916-b42e-15fc060d72c4): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.298131 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" podUID="df921247-4ca5-4916-b42e-15fc060d72c4" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.321475 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.322496 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvr8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-gpbmp_openstack-operators(eb139c31-b7c2-4d15-b9be-c541adf0c87f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.323690 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" podUID="eb139c31-b7c2-4d15-b9be-c541adf0c87f" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.341244 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.341447 4703 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2nnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-k7rqx_openstack-operators(1facff82-6e9e-4bed-8145-1b00dcc84f51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.342770 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" podUID="1facff82-6e9e-4bed-8145-1b00dcc84f51" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.351052 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.351339 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gx5sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-4njvk_openstack-operators(874a8c8a-8438-4764-9660-31185bf873e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.352568 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" podUID="874a8c8a-8438-4764-9660-31185bf873e6" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.363993 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.364174 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvmtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-pswqz_openstack-operators(37c73372-42dd-44a5-a5cc-e7d324be6981): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.365383 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" podUID="37c73372-42dd-44a5-a5cc-e7d324be6981" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.414139 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.414424 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8bgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-9x27n_openstack-operators(063b1e9b-8501-497e-b999-280076922605): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.415637 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" podUID="063b1e9b-8501-497e-b999-280076922605" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.469686 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.470115 4703 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-zcck5_openstack-operators(4868f7dd-ada4-4df8-9bc8-ae5ca73f2935): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.471338 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" podUID="4868f7dd-ada4-4df8-9bc8-ae5ca73f2935" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.566394 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.566561 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjt87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-959xw_openstack-operators(d34eb12d-d10a-406c-bfd9-9f772f9e63eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.568023 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" podUID="d34eb12d-d10a-406c-bfd9-9f772f9e63eb" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.667172 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.668032 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvm54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-v9r65_openstack-operators(8aff308b-1702-4057-80f7-517462396b76): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.669251 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" podUID="8aff308b-1702-4057-80f7-517462396b76" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.775115 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.775292 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jh75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-srhz6_openstack-operators(13a53c62-2578-4060-8dbf-17fccd6080b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.776463 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" podUID="13a53c62-2578-4060-8dbf-17fccd6080b1" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.893360 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlgm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-tv5tz_openstack-operators(de1bc545-3573-49b4-9a2d-8db33d6f37d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.896560 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" podUID="de1bc545-3573-49b4-9a2d-8db33d6f37d1" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.941234 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.942767 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-954w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-84b575879fgcmht_openstack-operators(a3ff4025-d356-4b31-b42b-5e198155ba91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.949092 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-954w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-84b575879fgcmht_openstack-operators(a3ff4025-d356-4b31-b42b-5e198155ba91): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:24:15 crc kubenswrapper[4703]: E1209 12:24:15.950880 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" podUID="a3ff4025-d356-4b31-b42b-5e198155ba91" Dec 09 12:24:16 crc kubenswrapper[4703]: E1209 12:24:16.899385 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" podUID="a3ff4025-d356-4b31-b42b-5e198155ba91" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.454272 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.454438 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jt7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lrk9g_openstack-operators(acd94eef-86bb-4acc-8790-011d87eb0da4): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.455593 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" podUID="acd94eef-86bb-4acc-8790-011d87eb0da4" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.459843 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.459889 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.459985 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmt8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-7765d96ddf-bv7zd_openstack-operators(73fb1f5d-7761-420a-b327-568cab0fb0d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.460028 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.460066 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.460120 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pdp7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-797ff5dd46-77fms_openstack-operators(db6f122b-a853-4ecb-8d82-2a8a04c8224e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.460154 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6glr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-cnjkv_openstack-operators(fc2c796b-a300-435a-bce4-be428b7b4ac6): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.460211 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zf2r9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-4qrc4_openstack-operators(53f0e694-8a8f-4985-a614-f8ca11f6cf32): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.461294 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" podUID="53f0e694-8a8f-4985-a614-f8ca11f6cf32" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.461317 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" podUID="fc2c796b-a300-435a-bce4-be428b7b4ac6" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.461362 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" podUID="73fb1f5d-7761-420a-b327-568cab0fb0d2" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.461386 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podUID="db6f122b-a853-4ecb-8d82-2a8a04c8224e" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.493286 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.493480 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxjws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-78d48bff9d-2gwjv_openstack-operators(28c19114-205f-4c58-8ca3-7a7a19b0968b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.608219 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.608442 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bblbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-kvpf4_openstack-operators(79e382ca-8d65-45fc-8dbf-3626827cb50f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.609785 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" 
podUID="79e382ca-8d65-45fc-8dbf-3626827cb50f" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.796627 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.796930 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnx2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-md4r8_openstack-operators(2866fa2f-a90b-4137-8ef3-23e9e1140899): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.798240 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" podUID="2866fa2f-a90b-4137-8ef3-23e9e1140899" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.904922 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.906680 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.906720 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.908682 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.910105 4703 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" Dec 09 12:24:17 crc kubenswrapper[4703]: I1209 12:24:17.910593 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.934957 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" podUID="53f0e694-8a8f-4985-a614-f8ca11f6cf32" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.969377 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" podUID="fc2c796b-a300-435a-bce4-be428b7b4ac6" Dec 09 12:24:17 crc kubenswrapper[4703]: E1209 12:24:17.969769 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" podUID="acd94eef-86bb-4acc-8790-011d87eb0da4" Dec 09 12:24:18 crc kubenswrapper[4703]: E1209 12:24:18.121827 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" podUID="56aba94b-3065-4e94-a683-ddcb0f0f1734" Dec 09 12:24:18 crc kubenswrapper[4703]: E1209 12:24:18.374402 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" podUID="28c19114-205f-4c58-8ca3-7a7a19b0968b" Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.924166 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" event={"ID":"8aff308b-1702-4057-80f7-517462396b76","Type":"ContainerStarted","Data":"1b63425321ccd4822176ecdd4da173a9a87046b7fc6d8f839a11000af52d3696"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.944573 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" event={"ID":"37c73372-42dd-44a5-a5cc-e7d324be6981","Type":"ContainerStarted","Data":"4013dfa6259886995abb41b70a27bdcc4d6c689c68275ad5b9baed333a1b21ad"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.944663 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" event={"ID":"37c73372-42dd-44a5-a5cc-e7d324be6981","Type":"ContainerStarted","Data":"4118a73c04d8c605307c5595a1b7f66424df279dcc302536775a6572762fa87e"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.945492 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.963606 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" event={"ID":"063b1e9b-8501-497e-b999-280076922605","Type":"ContainerStarted","Data":"30f4e30a5dc17a195b255ba6547233b6bd57b1f57d40f68a11c2ef72f1273b11"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.971450 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" event={"ID":"df921247-4ca5-4916-b42e-15fc060d72c4","Type":"ContainerStarted","Data":"f78c14d5e1ab1b40592b58b32d6477ee598175d3080c0e3df1b206391bac9366"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.971513 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" event={"ID":"df921247-4ca5-4916-b42e-15fc060d72c4","Type":"ContainerStarted","Data":"1aba0e160c339ada2b610337c80246635dacac6b8b7f8c4a608a27179a2d5516"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.985541 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" event={"ID":"1facff82-6e9e-4bed-8145-1b00dcc84f51","Type":"ContainerStarted","Data":"5c78c4321bf4994910ea93bd060809acd999ad3b852a191d725dffcba8e35c4b"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.985613 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" event={"ID":"1facff82-6e9e-4bed-8145-1b00dcc84f51","Type":"ContainerStarted","Data":"c3f81da6a3880a393d61204e0d01fe6b9d8842b432864c5dccc03c61a6b3eeab"} Dec 09 12:24:18 crc kubenswrapper[4703]: I1209 12:24:18.986415 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.016181 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" event={"ID":"874a8c8a-8438-4764-9660-31185bf873e6","Type":"ContainerStarted","Data":"76f0530e89d583030a8424189ef8bc3a718609d83a9e60e6daf0ed6009b5fb8b"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.018371 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" event={"ID":"eb139c31-b7c2-4d15-b9be-c541adf0c87f","Type":"ContainerStarted","Data":"15d2c2bff27a109b6d066f9d934d803bda92452c87720a6e18b0e111e2a187cd"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.018399 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" event={"ID":"eb139c31-b7c2-4d15-b9be-c541adf0c87f","Type":"ContainerStarted","Data":"18a2d2dcf97963131c4cbfd6a23a29b056680df4468887ae66c67c9498894797"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.019068 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.019917 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" 
event={"ID":"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935","Type":"ContainerStarted","Data":"62585abe666b92ea0b6133628da17a14f8ccbdb58252482cef90261b753237d8"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.021006 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" event={"ID":"13a53c62-2578-4060-8dbf-17fccd6080b1","Type":"ContainerStarted","Data":"433695231af846899e7e4fc0845289d7e3bec42f551a6553aa342238e4060993"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.021210 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.031805 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" podStartSLOduration=3.9241376580000003 podStartE2EDuration="1m1.031783343s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.613775828 +0000 UTC m=+1099.862539347" lastFinishedPulling="2025-12-09 12:24:17.721421513 +0000 UTC m=+1156.970185032" observedRunningTime="2025-12-09 12:24:19.030983252 +0000 UTC m=+1158.279746771" watchObservedRunningTime="2025-12-09 12:24:19.031783343 +0000 UTC m=+1158.280546862" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.033453 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" event={"ID":"56aba94b-3065-4e94-a683-ddcb0f0f1734","Type":"ContainerStarted","Data":"18d051ae66a7505cb69dd281bb0867b0f1c8e3d9d5297d48244eab4c7f146ab6"} Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.034348 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.038246 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" event={"ID":"28c19114-205f-4c58-8ca3-7a7a19b0968b","Type":"ContainerStarted","Data":"3625e5911d50e951744f7d4335c8b3b4da3087a33d57006be1d9503265994fb1"} Dec 09 12:24:19 crc kubenswrapper[4703]: E1209 12:24:19.042363 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" podUID="28c19114-205f-4c58-8ca3-7a7a19b0968b" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.162015 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" podStartSLOduration=3.856325145 podStartE2EDuration="1m2.161994074s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:19.390419495 +0000 UTC m=+1098.639183014" lastFinishedPulling="2025-12-09 12:24:17.696088424 +0000 UTC m=+1156.944851943" observedRunningTime="2025-12-09 12:24:19.11301586 +0000 UTC m=+1158.361779389" watchObservedRunningTime="2025-12-09 12:24:19.161994074 +0000 UTC m=+1158.410757593" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.278424 4703 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" podStartSLOduration=4.011453888 podStartE2EDuration="1m1.278406561s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.45331668 +0000 UTC m=+1099.702080199" lastFinishedPulling="2025-12-09 12:24:17.720269353 +0000 UTC m=+1156.969032872" observedRunningTime="2025-12-09 12:24:19.193183828 +0000 UTC m=+1158.441947347" watchObservedRunningTime="2025-12-09 12:24:19.278406561 +0000 UTC m=+1158.527170080" Dec 09 12:24:19 crc kubenswrapper[4703]: I1209 12:24:19.280106 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" podStartSLOduration=4.288186713 podStartE2EDuration="1m1.280100635s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.703894824 +0000 UTC m=+1099.952658343" lastFinishedPulling="2025-12-09 12:24:17.695808746 +0000 UTC m=+1156.944572265" observedRunningTime="2025-12-09 12:24:19.249401534 +0000 UTC m=+1158.498165053" watchObservedRunningTime="2025-12-09 12:24:19.280100635 +0000 UTC m=+1158.528864154" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.069345 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" event={"ID":"13a53c62-2578-4060-8dbf-17fccd6080b1","Type":"ContainerStarted","Data":"b698f88e0ac201b6c75a1d71040c982111951f4c01ffdcfb75e8d17dbf5b920c"} Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.083071 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" event={"ID":"8aff308b-1702-4057-80f7-517462396b76","Type":"ContainerStarted","Data":"5bf0b243de83d053f7538edf90cb8f137e0ca38528a8d8224f84bdc632babcf6"} Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.084478 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.109602 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" event={"ID":"56aba94b-3065-4e94-a683-ddcb0f0f1734","Type":"ContainerStarted","Data":"a995b88d9813e5e90f89495f014f614125ace8f4e2e9c3cc308fa41f9c85b313"} Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.125382 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" event={"ID":"874a8c8a-8438-4764-9660-31185bf873e6","Type":"ContainerStarted","Data":"8ec0f04f0aed0ba37c6e573b02d526e21b8951e4be6b736248aee11aa9cc0844"} Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.126004 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.251649 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" event={"ID":"4868f7dd-ada4-4df8-9bc8-ae5ca73f2935","Type":"ContainerStarted","Data":"9c572b2135f11a5ab36ba8d2e608beb6eec3042d14ac275951d82524fb2c8f68"} Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.251686 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.252049 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:24:20 crc kubenswrapper[4703]: E1209 12:24:20.255256 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" podUID="28c19114-205f-4c58-8ca3-7a7a19b0968b" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.289706 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" podStartSLOduration=4.944127675 podStartE2EDuration="1m3.289682566s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:19.375904763 +0000 UTC m=+1098.624668292" lastFinishedPulling="2025-12-09 12:24:17.721459664 +0000 UTC m=+1156.970223183" observedRunningTime="2025-12-09 12:24:20.286618595 +0000 UTC m=+1159.535382134" watchObservedRunningTime="2025-12-09 12:24:20.289682566 +0000 UTC m=+1159.538446085" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.292738 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" podStartSLOduration=5.264073158 podStartE2EDuration="1m3.292729626s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.047937151 +0000 UTC m=+1099.296700660" lastFinishedPulling="2025-12-09 12:24:18.076593609 +0000 UTC m=+1157.325357128" observedRunningTime="2025-12-09 12:24:20.185876923 +0000 UTC m=+1159.434640442" watchObservedRunningTime="2025-12-09 12:24:20.292729626 +0000 UTC m=+1159.541493145" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.328331 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" podStartSLOduration=4.786949922 podStartE2EDuration="1m3.328311117s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:19.624881843 +0000 UTC m=+1098.873645362" lastFinishedPulling="2025-12-09 12:24:18.166243038 +0000 UTC m=+1157.415006557" observedRunningTime="2025-12-09 12:24:20.321264881 +0000 UTC m=+1159.570028410" watchObservedRunningTime="2025-12-09 12:24:20.328311117 +0000 UTC m=+1159.577074636" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.373311 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" podStartSLOduration=5.291751742 podStartE2EDuration="1m2.373291586s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.614246741 +0000 UTC m=+1099.863010260" lastFinishedPulling="2025-12-09 12:24:17.695786585 +0000 UTC m=+1156.944550104" observedRunningTime="2025-12-09 12:24:20.368638532 +0000 UTC m=+1159.617402051" watchObservedRunningTime="2025-12-09 12:24:20.373291586 +0000 UTC m=+1159.622055105" Dec 09 12:24:20 crc kubenswrapper[4703]: I1209 12:24:20.558154 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" podStartSLOduration=5.200506618 podStartE2EDuration="1m2.558132201s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.49696896 +0000 UTC m=+1099.745732479" lastFinishedPulling="2025-12-09 12:24:17.854594523 +0000 UTC m=+1157.103358062" observedRunningTime="2025-12-09 12:24:20.548565048 +0000 UTC m=+1159.797328567" watchObservedRunningTime="2025-12-09 12:24:20.558132201 +0000 UTC m=+1159.806895720" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.259637 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" event={"ID":"fc2c796b-a300-435a-bce4-be428b7b4ac6","Type":"ContainerStarted","Data":"97462a286a72caa23bc57622eced002812b8e008cbd70e3e897e76bb5afb3079"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.263157 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" event={"ID":"79e382ca-8d65-45fc-8dbf-3626827cb50f","Type":"ContainerStarted","Data":"dd8af55c4628dd352805d9ddfa26ba9998e10abc2a2afe019f488f886e9f86ef"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.263220 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" event={"ID":"79e382ca-8d65-45fc-8dbf-3626827cb50f","Type":"ContainerStarted","Data":"445b053d6a194d4c25723325966850cfeb8f030c6bd3398d3fde38afca59af77"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.263477 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.265452 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" event={"ID":"53f0e694-8a8f-4985-a614-f8ca11f6cf32","Type":"ContainerStarted","Data":"b2efcff1f6187d436b8b03a90db104606efb67fbf325b625d8e9156f2a825ff4"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.267522 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" event={"ID":"73fb1f5d-7761-420a-b327-568cab0fb0d2","Type":"ContainerStarted","Data":"8470ff9c26734e64a6a301a95c982cef8df4895ab29e657b52a13177436cf523"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.267552 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" event={"ID":"73fb1f5d-7761-420a-b327-568cab0fb0d2","Type":"ContainerStarted","Data":"659e7e8d414435f283286c967b39b9e095cb4470ba2050779ea0f4d434dc525a"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.267714 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.269257 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" event={"ID":"2866fa2f-a90b-4137-8ef3-23e9e1140899","Type":"ContainerStarted","Data":"eb1402c395bb73f3a8635ac8014ff45fa673acc9453c327a0144a1cb64472e08"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.269289 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" event={"ID":"2866fa2f-a90b-4137-8ef3-23e9e1140899","Type":"ContainerStarted","Data":"610808aaf40ea161aef4bc13adbd9aa6f6dff6ab84151536fbaf4dd763405c4f"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.269933 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.272128 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" event={"ID":"063b1e9b-8501-497e-b999-280076922605","Type":"ContainerStarted","Data":"af7caefbbd7656081e70e67e0ee38a3d3d31155dc8080f59851ae3297b0b61b2"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.272879 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.275435 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" event={"ID":"acd94eef-86bb-4acc-8790-011d87eb0da4","Type":"ContainerStarted","Data":"aa054c84f06b063441ca48ec0e0abbfc825bc37a775bd48e8632ddfc88dbdb85"} Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.307010 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-cnjkv" podStartSLOduration=27.606311068 podStartE2EDuration="1m4.306990812s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:19.346756905 +0000 UTC m=+1098.595520424" lastFinishedPulling="2025-12-09 12:23:56.047436649 +0000 UTC m=+1135.296200168" observedRunningTime="2025-12-09 12:24:21.299090313 +0000 UTC m=+1160.547853842" watchObservedRunningTime="2025-12-09 12:24:21.306990812 +0000 UTC m=+1160.555754331" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.376147 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8" podStartSLOduration=5.17458011 podStartE2EDuration="1m3.376124648s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.755076029 +0000 UTC m=+1100.003839548" lastFinishedPulling="2025-12-09 12:24:18.956620567 +0000 UTC m=+1158.205384086" observedRunningTime="2025-12-09 12:24:21.331051678 +0000 UTC m=+1160.579815187" watchObservedRunningTime="2025-12-09 12:24:21.376124648 +0000 UTC m=+1160.624888167" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.409889 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" podStartSLOduration=4.467002767 podStartE2EDuration="1m3.40986606s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.105451156 +0000 UTC m=+1099.354214675" lastFinishedPulling="2025-12-09 12:24:19.048314449 +0000 UTC m=+1158.297077968" observedRunningTime="2025-12-09 12:24:21.380471403 +0000 UTC m=+1160.629234922" watchObservedRunningTime="2025-12-09 12:24:21.40986606 +0000 UTC m=+1160.658629589" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.411347 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4" 
podStartSLOduration=4.867675874 podStartE2EDuration="1m3.411337369s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.663327498 +0000 UTC m=+1099.912091017" lastFinishedPulling="2025-12-09 12:24:19.206988993 +0000 UTC m=+1158.455752512" observedRunningTime="2025-12-09 12:24:21.411203175 +0000 UTC m=+1160.659966694" watchObservedRunningTime="2025-12-09 12:24:21.411337369 +0000 UTC m=+1160.660100888" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.436299 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lrk9g" podStartSLOduration=27.832556229 podStartE2EDuration="1m3.436282118s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.443828923 +0000 UTC m=+1099.692592442" lastFinishedPulling="2025-12-09 12:23:56.047554812 +0000 UTC m=+1135.296318331" observedRunningTime="2025-12-09 12:24:21.432808046 +0000 UTC m=+1160.681571575" watchObservedRunningTime="2025-12-09 12:24:21.436282118 +0000 UTC m=+1160.685045637" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.460899 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" podStartSLOduration=6.626169934 podStartE2EDuration="1m4.460881119s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.142016435 +0000 UTC m=+1099.390779954" lastFinishedPulling="2025-12-09 12:24:17.97672762 +0000 UTC m=+1157.225491139" observedRunningTime="2025-12-09 12:24:21.45641729 +0000 UTC m=+1160.705180809" watchObservedRunningTime="2025-12-09 12:24:21.460881119 +0000 UTC m=+1160.709644638" Dec 09 12:24:21 crc kubenswrapper[4703]: I1209 12:24:21.514933 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-4qrc4" podStartSLOduration=28.529386846 podStartE2EDuration="1m4.514906686s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.061997341 +0000 UTC m=+1099.310760860" lastFinishedPulling="2025-12-09 12:23:56.047517181 +0000 UTC m=+1135.296280700" observedRunningTime="2025-12-09 12:24:21.508801005 +0000 UTC m=+1160.757564534" watchObservedRunningTime="2025-12-09 12:24:21.514906686 +0000 UTC m=+1160.763670205" Dec 09 12:24:23 crc kubenswrapper[4703]: I1209 12:24:23.292619 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9x27n" Dec 09 12:24:26 crc kubenswrapper[4703]: I1209 12:24:26.319018 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" event={"ID":"b37cc5b3-46d5-403e-be1b-46eebc75f0ef","Type":"ContainerStarted","Data":"93a64ac89105f7f4de79a6740f14ee0fea2ddf43492527342c995a30a63c6db9"} Dec 09 12:24:26 crc kubenswrapper[4703]: I1209 12:24:26.340003 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-842l9" podStartSLOduration=3.205204809 podStartE2EDuration="1m8.339980321s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.763667151 +0000 UTC m=+1100.012430670" lastFinishedPulling="2025-12-09 12:24:25.898442663 +0000 UTC m=+1165.147206182" observedRunningTime="2025-12-09 12:24:26.335654567 +0000 UTC 
m=+1165.584418096" watchObservedRunningTime="2025-12-09 12:24:26.339980321 +0000 UTC m=+1165.588743830" Dec 09 12:24:27 crc kubenswrapper[4703]: E1209 12:24:27.282776 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" podUID="de1bc545-3573-49b4-9a2d-8db33d6f37d1" Dec 09 12:24:27 crc kubenswrapper[4703]: I1209 12:24:27.328651 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" event={"ID":"de1bc545-3573-49b4-9a2d-8db33d6f37d1","Type":"ContainerStarted","Data":"ff0e8fa317e384b50422f2530e226ab8e70eb425c8d41c825a6415c757529ff7"} Dec 09 12:24:27 crc kubenswrapper[4703]: E1209 12:24:27.330601 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" podUID="de1bc545-3573-49b4-9a2d-8db33d6f37d1" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.227119 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-rqtl7" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.254755 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-4njvk" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.318792 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-srhz6" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.588218 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-v9r65" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.703306 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-bv7zd" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.743749 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8xf75" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.804939 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7rqx" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.814862 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-gpbmp" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.863860 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-pswqz" Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.965908 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-zcck5" 
Dec 09 12:24:28 crc kubenswrapper[4703]: I1209 12:24:28.977331 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-kvpf4"
Dec 09 12:24:29 crc kubenswrapper[4703]: I1209 12:24:29.086901 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-md4r8"
Dec 09 12:24:29 crc kubenswrapper[4703]: I1209 12:24:29.351945 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" event={"ID":"d34eb12d-d10a-406c-bfd9-9f772f9e63eb","Type":"ContainerStarted","Data":"6ef103f31b7cab93094ffbb3737f7c716f9d468b250e3724267c4c7c22e82ee7"}
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.083821 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.084148 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.084234 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk"
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.084999 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.085061 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f" gracePeriod=600
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.366900 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" event={"ID":"d34eb12d-d10a-406c-bfd9-9f772f9e63eb","Type":"ContainerStarted","Data":"2a08a242273a46fdcc1ed40ce6b76a8ec4c1cb505edb8e0a17b34350f8bc84f3"}
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.366983 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw"
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.372747 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f" exitCode=0
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.372812 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f"}
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.372859 4703 scope.go:117] "RemoveContainer" containerID="b834b447788d8be29753e6c06c8a6c802214a19ed04f8682755c759ef6ba04af"
Dec 09 12:24:30 crc kubenswrapper[4703]: I1209 12:24:30.391473 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw" podStartSLOduration=4.102647032 podStartE2EDuration="1m12.391445743s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.755798438 +0000 UTC m=+1100.004561957" lastFinishedPulling="2025-12-09 12:24:29.044597149 +0000 UTC m=+1168.293360668" observedRunningTime="2025-12-09 12:24:30.388328781 +0000 UTC m=+1169.637092300" watchObservedRunningTime="2025-12-09 12:24:30.391445743 +0000 UTC m=+1169.640209262"
Dec 09 12:24:31 crc kubenswrapper[4703]: I1209 12:24:31.384209 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101"}
Dec 09 12:24:31 crc kubenswrapper[4703]: I1209 12:24:31.388075 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" event={"ID":"a3ff4025-d356-4b31-b42b-5e198155ba91","Type":"ContainerStarted","Data":"0800bfabb933614cb2c485ca657af6b4b5af604a2620060d3a53e891c494d5d6"}
Dec 09 12:24:31 crc kubenswrapper[4703]: I1209 12:24:31.388125 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" event={"ID":"a3ff4025-d356-4b31-b42b-5e198155ba91","Type":"ContainerStarted","Data":"2f25f71199af448f3ba0460f307a3b7533c7966189171e304c02e4eeec53b442"}
Dec 09 12:24:31 crc kubenswrapper[4703]: I1209 12:24:31.388293 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht"
Dec 09 12:24:31 crc kubenswrapper[4703]: I1209 12:24:31.441325 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht" podStartSLOduration=42.192815912 podStartE2EDuration="1m13.441304348s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:58.911448798 +0000 UTC m=+1138.160212327" lastFinishedPulling="2025-12-09 12:24:30.159937244 +0000 UTC m=+1169.408700763" observedRunningTime="2025-12-09 12:24:31.437814336 +0000 UTC m=+1170.686577855" watchObservedRunningTime="2025-12-09 12:24:31.441304348 +0000 UTC m=+1170.690067867"
Dec 09 12:24:32 crc kubenswrapper[4703]: I1209 12:24:32.399098 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" event={"ID":"db6f122b-a853-4ecb-8d82-2a8a04c8224e","Type":"ContainerStarted","Data":"50c463b12707148b112f755795ced579b67bd76ca399833ea00bdeac17f5f656"}
Dec 09 12:24:32 crc kubenswrapper[4703]: I1209 12:24:32.399620 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" event={"ID":"db6f122b-a853-4ecb-8d82-2a8a04c8224e","Type":"ContainerStarted","Data":"50da6a354f8d55c791cae12f4ffd3c9663d369f3543fe4fe1b10046dba411284"}
Dec 09 12:24:32 crc kubenswrapper[4703]: I1209 12:24:32.399991 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms"
Dec 09 12:24:32 crc kubenswrapper[4703]: I1209 12:24:32.424602 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podStartSLOduration=3.9812792420000003 podStartE2EDuration="1m14.424568824s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.78326113 +0000 UTC m=+1100.032024649" lastFinishedPulling="2025-12-09 12:24:31.226550712 +0000 UTC m=+1170.475314231" observedRunningTime="2025-12-09 12:24:32.420235449 +0000 UTC m=+1171.668998968" watchObservedRunningTime="2025-12-09 12:24:32.424568824 +0000 UTC m=+1171.673332333"
Dec 09 12:24:37 crc kubenswrapper[4703]: I1209 12:24:37.450974 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" event={"ID":"28c19114-205f-4c58-8ca3-7a7a19b0968b","Type":"ContainerStarted","Data":"2382ff33cec75af2534cddaf6ac402a004b68bc4f1ab341b72c4ce335704eaf0"}
Dec 09 12:24:37 crc kubenswrapper[4703]: I1209 12:24:37.451701 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv"
Dec 09 12:24:37 crc kubenswrapper[4703]: I1209 12:24:37.470104 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv" podStartSLOduration=41.853634711 podStartE2EDuration="1m20.470073245s" podCreationTimestamp="2025-12-09 12:23:17 +0000 UTC" firstStartedPulling="2025-12-09 12:23:58.017339299 +0000 UTC m=+1137.266102818" lastFinishedPulling="2025-12-09 12:24:36.633777833 +0000 UTC m=+1175.882541352" observedRunningTime="2025-12-09 12:24:37.466025239 +0000 UTC m=+1176.714788758" watchObservedRunningTime="2025-12-09 12:24:37.470073245 +0000 UTC m=+1176.718836764"
Dec 09 12:24:39 crc kubenswrapper[4703]: I1209 12:24:39.194902 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms"
Dec 09 12:24:39 crc kubenswrapper[4703]: I1209 12:24:39.312697 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-959xw"
Dec 09 12:24:40 crc kubenswrapper[4703]: I1209 12:24:40.901155 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fgcmht"
Dec 09 12:24:43 crc kubenswrapper[4703]: I1209 12:24:43.507035 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" event={"ID":"de1bc545-3573-49b4-9a2d-8db33d6f37d1","Type":"ContainerStarted","Data":"c51afdbf11c38b01f35a0ea66dd4eb2d2b9b21df96eba71facf2ed7b7bf4a6c2"}
Dec 09 12:24:43 crc kubenswrapper[4703]: I1209 12:24:43.507561 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz"
Dec 09 12:24:43 crc kubenswrapper[4703]: I1209 12:24:43.543989 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz" podStartSLOduration=3.687749407 podStartE2EDuration="1m25.543957885s" podCreationTimestamp="2025-12-09 12:23:18 +0000 UTC" firstStartedPulling="2025-12-09 12:23:20.755416197 +0000 UTC m=+1100.004179716" lastFinishedPulling="2025-12-09 12:24:42.611624675 +0000 UTC m=+1181.860388194" observedRunningTime="2025-12-09 12:24:43.532768309 +0000 UTC m=+1182.781531828" watchObservedRunningTime="2025-12-09 12:24:43.543957885 +0000 UTC m=+1182.792721404"
Dec 09 12:24:44 crc kubenswrapper[4703]: I1209 12:24:44.361644 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-2gwjv"
Dec 09 12:24:49 crc kubenswrapper[4703]: I1209 12:24:49.031903 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-tv5tz"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.068345 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"]
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.072476 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.075163 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-66hjv"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.076367 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.076506 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.076510 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.086897 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"]
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.180652 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"]
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.182124 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.201793 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.209488 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"]
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.250985 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.251055 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.251257 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.251307 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6sm\" (UniqueName: \"kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.251331 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4xk\" (UniqueName: \"kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.351778 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.351843 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d"
Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.351877 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6sm\" (UniqueName: \"kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj"
Dec
09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.351928 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4xk\" (UniqueName: \"kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.352000 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.352892 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.352953 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.352979 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.373524 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6sm\" (UniqueName: \"kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm\") pod \"dnsmasq-dns-675f4bcbfc-kmmrj\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.388227 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4xk\" (UniqueName: \"kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk\") pod \"dnsmasq-dns-78dd6ddcc-qmw2d\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.406291 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:25:08 crc kubenswrapper[4703]: I1209 12:25:08.500055 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:25:09 crc kubenswrapper[4703]: I1209 12:25:09.385242 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"] Dec 09 12:25:09 crc kubenswrapper[4703]: I1209 12:25:09.847514 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"] Dec 09 12:25:10 crc kubenswrapper[4703]: I1209 12:25:10.107243 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" event={"ID":"1670b15c-6a1f-429f-adba-1f4c9e61fc58","Type":"ContainerStarted","Data":"054c5cb01548155af78490d72fd9fd77989388740c6be0fd7243dc969f143778"} Dec 09 12:25:10 crc kubenswrapper[4703]: I1209 12:25:10.109011 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" event={"ID":"750b23e7-4a88-413d-89f8-9bb42e8f48a6","Type":"ContainerStarted","Data":"6292c4d37f3634d5028306e1127f544dd88d52b018bffa73b3aa3bf9ed634207"} Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.057701 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.107615 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.113757 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.161504 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.205842 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.206146 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd89f\" (UniqueName: \"kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.206210 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.344305 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd89f\" (UniqueName: \"kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.344355 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc\") 
pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.344390 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.345437 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.345661 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.389608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd89f\" (UniqueName: \"kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f\") pod \"dnsmasq-dns-666b6646f7-tbdjp\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") " pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.450565 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.479063 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.514592 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.521975 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.536454 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"] Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.648549 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.648621 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8qf\" (UniqueName: \"kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.648681 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.755609 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.755688 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8qf\" (UniqueName: \"kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.755759 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.756715 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.756744 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.778975 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8qf\" (UniqueName: 
\"kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf\") pod \"dnsmasq-dns-57d769cc4f-rxj84\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") " pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:11 crc kubenswrapper[4703]: I1209 12:25:11.865667 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.299258 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.300785 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.330444 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.330674 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.330825 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.331129 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.331386 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5pfx6" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.332131 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.332507 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.345445 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.419496 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"] Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426298 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426355 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426394 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4zt\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426430 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426472 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426502 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426531 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426566 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426596 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426624 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.426829 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: W1209 12:25:12.516854 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda733df4a_37e3_4c6a_825e_cb27ba720b71.slice/crio-11940062818d03432589dcbf1ae7095dfad66c0cb6365b38da50f98d0ab7c64d WatchSource:0}: Error finding container 11940062818d03432589dcbf1ae7095dfad66c0cb6365b38da50f98d0ab7c64d: Status 404 returned error can't find the container with id 11940062818d03432589dcbf1ae7095dfad66c0cb6365b38da50f98d0ab7c64d Dec 09 12:25:12 crc 
kubenswrapper[4703]: I1209 12:25:12.528273 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528342 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528370 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528405 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528436 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528466 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528492 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528519 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528574 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528607 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.528739 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4zt\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.531092 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.531926 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.532435 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.532525 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.532532 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.543241 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.545003 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.545326 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.553103 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.569987 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.570259 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edeb694bd7f36907ab0d9fdf73908fcc67dae265d9e4f5f826263d7e949d0d97/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.574519 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4zt\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.665945 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.691240 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.693034 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.708821 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.709007 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.726905 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.727433 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.727632 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-brp2m" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.727787 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.727940 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.747356 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.842914 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.842964 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.842985 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlc6p\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843003 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843034 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843065 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843088 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843149 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843173 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843238 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.843259 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.928735 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.948421 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.949555 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.949740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.949940 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950108 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950307 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950337 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc6p\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950261 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950469 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950498 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950637 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.950789 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.951335 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.953083 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.953594 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.953755 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.954619 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.959812 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.960219 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.962630 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:12 crc kubenswrapper[4703]: I1209 12:25:12.978383 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc6p\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.013650 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.013782 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8335e5646221fd2e994a320d6d2dc8103f0d7ceb1e7a0b133efe500933756fc8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.036280 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"] Dec 09 12:25:13 crc kubenswrapper[4703]: W1209 12:25:13.044694 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da8660a_e38a_41f1_8c85_e262a4ad191a.slice/crio-7af8c900f12a8946e9ff7f91472096311f7b1faaaaa4bb594e0e1dd00b39c024 WatchSource:0}: Error finding container 7af8c900f12a8946e9ff7f91472096311f7b1faaaaa4bb594e0e1dd00b39c024: Status 404 returned error can't find the container with id 7af8c900f12a8946e9ff7f91472096311f7b1faaaaa4bb594e0e1dd00b39c024 Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.067690 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.175108 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" event={"ID":"2da8660a-e38a-41f1-8c85-e262a4ad191a","Type":"ContainerStarted","Data":"7af8c900f12a8946e9ff7f91472096311f7b1faaaaa4bb594e0e1dd00b39c024"} Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.177108 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" event={"ID":"a733df4a-37e3-4c6a-825e-cb27ba720b71","Type":"ContainerStarted","Data":"11940062818d03432589dcbf1ae7095dfad66c0cb6365b38da50f98d0ab7c64d"} Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 
12:25:13.352602 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.687880 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.689325 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.691350 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.694402 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.694704 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vmkvn" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.696540 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.715338 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.727103 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.741735 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.741790 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.741904 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-kolla-config\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.741942 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-default\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.741979 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcq8d\" (UniqueName: \"kubernetes.io/projected/00872775-af9b-49e8-9a6e-08baa2171c88-kube-api-access-qcq8d\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.742047 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.742083 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.742121 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.753264 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:25:13 crc kubenswrapper[4703]: W1209 12:25:13.830438 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b084a1a_44b8_439b_ad26_d1ead9d2f225.slice/crio-ee89ae1dacb0777659f275aa529f052f89b881ae878beb7d9bcaa173c0cecabd WatchSource:0}: Error finding container ee89ae1dacb0777659f275aa529f052f89b881ae878beb7d9bcaa173c0cecabd: Status 404 returned error can't find the container with id ee89ae1dacb0777659f275aa529f052f89b881ae878beb7d9bcaa173c0cecabd Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845149 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845405 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-kolla-config\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845480 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-default\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845532 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcq8d\" (UniqueName: \"kubernetes.io/projected/00872775-af9b-49e8-9a6e-08baa2171c88-kube-api-access-qcq8d\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845560 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845610 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845652 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.845786 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.846569 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-kolla-config\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.849131 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-operator-scripts\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.849965 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-default\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.850747 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/00872775-af9b-49e8-9a6e-08baa2171c88-config-data-generated\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.852698 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.852752 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3b25d07746e8caa76c0210b7e9183455b741f6e5d0ed2d716628360825fa0afb/globalmount\"" pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.885762 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.887027 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00872775-af9b-49e8-9a6e-08baa2171c88-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.904608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcq8d\" (UniqueName: \"kubernetes.io/projected/00872775-af9b-49e8-9a6e-08baa2171c88-kube-api-access-qcq8d\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:13 crc kubenswrapper[4703]: I1209 12:25:13.912341 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b5ca6a9-7c13-41ee-b41f-3c5e0f220297\") pod \"openstack-galera-0\" (UID: \"00872775-af9b-49e8-9a6e-08baa2171c88\") " pod="openstack/openstack-galera-0" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.035685 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.114422 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:25:14 crc kubenswrapper[4703]: W1209 12:25:14.121038 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a4d15bf_fdb7_47b8_b5ce_d2aff4b1f692.slice/crio-611d8c263d6dabd06ed4013a8a2f3d9226e26da12d0b00fdf8bb13ca0fdd5e49 WatchSource:0}: Error finding container 611d8c263d6dabd06ed4013a8a2f3d9226e26da12d0b00fdf8bb13ca0fdd5e49: Status 404 returned error can't find the container with id 611d8c263d6dabd06ed4013a8a2f3d9226e26da12d0b00fdf8bb13ca0fdd5e49 Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.292440 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerStarted","Data":"ee89ae1dacb0777659f275aa529f052f89b881ae878beb7d9bcaa173c0cecabd"} Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.298346 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerStarted","Data":"611d8c263d6dabd06ed4013a8a2f3d9226e26da12d0b00fdf8bb13ca0fdd5e49"} Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.808030 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:25:14 crc kubenswrapper[4703]: W1209 12:25:14.834327 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00872775_af9b_49e8_9a6e_08baa2171c88.slice/crio-cc4aaf84a075b3bd982ce23156758196ad9ce50a32d71cc1f703e26461a7c604 WatchSource:0}: Error finding container cc4aaf84a075b3bd982ce23156758196ad9ce50a32d71cc1f703e26461a7c604: Status 404 returned error can't find the container with id cc4aaf84a075b3bd982ce23156758196ad9ce50a32d71cc1f703e26461a7c604 Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.903838 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.908129 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.912474 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-w759z" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.912972 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.913351 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.913510 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 09 12:25:14 crc kubenswrapper[4703]: I1209 12:25:14.927607 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.111925 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112439 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112521 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112551 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112593 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntq2\" (UniqueName: \"kubernetes.io/projected/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kube-api-access-hntq2\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112625 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112702 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.112741 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.135352 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.137151 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.143876 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.143950 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.144391 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-scfmg" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.167779 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.215880 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.215978 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216158 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216210 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216253 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hntq2\" (UniqueName: \"kubernetes.io/projected/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kube-api-access-hntq2\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") 
" pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216313 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216448 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.216915 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.223954 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.226801 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.226844 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2c7dbeee801c4327fc1283cb3bbef64affb168d624670b68366e9725855d2c6a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.230255 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.233269 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.244007 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.246116 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.293894 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hntq2\" (UniqueName: \"kubernetes.io/projected/9bdf302a-4c2d-41c3-b1be-c08e52c5244c-kube-api-access-hntq2\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.319328 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-kolla-config\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.319410 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdm4d\" (UniqueName: \"kubernetes.io/projected/dad57d2e-6021-4515-9075-243ab3ce4aec-kube-api-access-cdm4d\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.319630 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-config-data\") pod \"memcached-0\" (UID: 
\"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.319696 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.319757 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.351580 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00872775-af9b-49e8-9a6e-08baa2171c88","Type":"ContainerStarted","Data":"cc4aaf84a075b3bd982ce23156758196ad9ce50a32d71cc1f703e26461a7c604"} Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.384552 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a8ac92f-c4b4-4f8d-ae94-d9d650b984af\") pod \"openstack-cell1-galera-0\" (UID: \"9bdf302a-4c2d-41c3-b1be-c08e52c5244c\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.421626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-kolla-config\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.421705 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdm4d\" (UniqueName: \"kubernetes.io/projected/dad57d2e-6021-4515-9075-243ab3ce4aec-kube-api-access-cdm4d\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.421755 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-config-data\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.421777 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.421807 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.426051 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-kolla-config\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.427224 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dad57d2e-6021-4515-9075-243ab3ce4aec-config-data\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.433174 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.443718 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dad57d2e-6021-4515-9075-243ab3ce4aec-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.466934 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdm4d\" (UniqueName: \"kubernetes.io/projected/dad57d2e-6021-4515-9075-243ab3ce4aec-kube-api-access-cdm4d\") pod \"memcached-0\" (UID: \"dad57d2e-6021-4515-9075-243ab3ce4aec\") " pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.479913 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 12:25:15 crc kubenswrapper[4703]: I1209 12:25:15.554673 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 12:25:16 crc kubenswrapper[4703]: I1209 12:25:16.633056 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 12:25:16 crc kubenswrapper[4703]: I1209 12:25:16.853447 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.164234 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.165638 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.168240 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5kwjd" Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.193328 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.317591 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28sm\" (UniqueName: \"kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm\") pod \"kube-state-metrics-0\" (UID: \"ab5733f2-517c-433a-bf6c-f1cd26dde97b\") " pod="openstack/kube-state-metrics-0" Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.420159 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v28sm\" (UniqueName: \"kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm\") pod \"kube-state-metrics-0\" (UID: \"ab5733f2-517c-433a-bf6c-f1cd26dde97b\") " pod="openstack/kube-state-metrics-0" Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.485941 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v28sm\" (UniqueName: \"kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm\") pod \"kube-state-metrics-0\" (UID: \"ab5733f2-517c-433a-bf6c-f1cd26dde97b\") " pod="openstack/kube-state-metrics-0" Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.498592 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dad57d2e-6021-4515-9075-243ab3ce4aec","Type":"ContainerStarted","Data":"b316b7959d9bbd4ac7cc81a07f277232a32dc7004dbf7463593c99ea61b417f6"} Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.503954 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9bdf302a-4c2d-41c3-b1be-c08e52c5244c","Type":"ContainerStarted","Data":"595e1560750dbfc8e2e44eb93f49ed74a601aad939455757333c4173a321c56e"} Dec 09 12:25:17 crc kubenswrapper[4703]: I1209 12:25:17.515387 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.088053 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.090709 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.106412 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-wdh77" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.106655 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.106809 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.106934 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.107040 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.138883 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259645 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259749 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259810 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259852 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259874 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkqdx\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-kube-api-access-qkqdx\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259918 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-volume\") pod 
\"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.259968 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.361762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.361857 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.361899 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.361962 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.362010 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.362044 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.362064 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkqdx\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-kube-api-access-qkqdx\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.365754 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.380940 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.382371 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.382814 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.418748 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.431366 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.435095 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkqdx\" (UniqueName: \"kubernetes.io/projected/fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b-kube-api-access-qkqdx\") pod \"alertmanager-metric-storage-0\" (UID: \"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b\") " pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.509797 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.512494 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.536372 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.543267 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.544098 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.544464 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7dvs9" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.545518 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.545743 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.564537 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569183 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569278 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569329 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569368 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569395 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569420 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569439 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.569463 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxxx\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.661550 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.702615 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.702739 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.702822 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.702879 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.702943 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxxx\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.703088 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.703284 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.703428 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.703704 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.711745 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.711813 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d37810f32880edcabb6347a05932ab8c2bcb1d3e05f799d5d3282bcd71eea829/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.713332 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.713784 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.723073 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.723487 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.726975 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.729558 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltxxx\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.738592 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.821235 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:18 crc kubenswrapper[4703]: I1209 12:25:18.970315 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:25:19 crc kubenswrapper[4703]: I1209 12:25:19.544035 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab5733f2-517c-433a-bf6c-f1cd26dde97b","Type":"ContainerStarted","Data":"0f5ed5d45924690b28ee5c7af61742cd6d0610af4d65cc72fcd1d880d830c660"} Dec 09 12:25:19 crc kubenswrapper[4703]: I1209 12:25:19.754704 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 09 12:25:19 crc kubenswrapper[4703]: I1209 12:25:19.900925 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:25:20 crc kubenswrapper[4703]: I1209 12:25:20.600351 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b","Type":"ContainerStarted","Data":"e32b2f29f3df192887c2226e59388194a4eb98d23e1a08ba1b8dea3a83a74f9b"} Dec 09 12:25:20 crc kubenswrapper[4703]: I1209 12:25:20.654688 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerStarted","Data":"14de0b022292cf950d4236a81b72d33a2c58a631a903965ca39cd975201f61c6"} Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.210888 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lsj78"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.212575 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.217106 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.217383 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jzkfm" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.222681 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.259973 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.324575 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-b4tvr"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.327417 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.336928 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4tvr"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344063 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-log-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344320 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn225\" (UniqueName: \"kubernetes.io/projected/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-kube-api-access-pn225\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344378 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344402 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344445 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-ovn-controller-tls-certs\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344473 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-scripts\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " 
pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.344496 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-combined-ca-bundle\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446056 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-log-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446144 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-etc-ovs\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446209 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn225\" (UniqueName: \"kubernetes.io/projected/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-kube-api-access-pn225\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446255 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8444467c-d711-4618-8518-1c45921e6493-scripts\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446282 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446305 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446323 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwncx\" (UniqueName: \"kubernetes.io/projected/8444467c-d711-4618-8518-1c45921e6493-kube-api-access-qwncx\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446374 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-log\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446406 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-ovn-controller-tls-certs\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446462 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-scripts\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446500 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-combined-ca-bundle\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446527 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-run\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.446552 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-lib\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.447218 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-log-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.447801 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run-ovn\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.447948 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-var-run\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.451665 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-scripts\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.478886 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-ovn-controller-tls-certs\") pod \"ovn-controller-lsj78\" (UID: 
\"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.514775 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn225\" (UniqueName: \"kubernetes.io/projected/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-kube-api-access-pn225\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.551980 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-log\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552354 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-run\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552423 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-lib\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552515 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-log\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552654 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-etc-ovs\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552834 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-lib\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552900 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-var-run\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.552916 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8444467c-d711-4618-8518-1c45921e6493-scripts\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.553121 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwncx\" (UniqueName: 
\"kubernetes.io/projected/8444467c-d711-4618-8518-1c45921e6493-kube-api-access-qwncx\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.553404 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8444467c-d711-4618-8518-1c45921e6493-etc-ovs\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.561108 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8444467c-d711-4618-8518-1c45921e6493-scripts\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.565235 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222b5eb-89d5-41be-ab08-6f3f3f4dab42-combined-ca-bundle\") pod \"ovn-controller-lsj78\" (UID: \"4222b5eb-89d5-41be-ab08-6f3f3f4dab42\") " pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.572242 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lsj78" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.612641 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwncx\" (UniqueName: \"kubernetes.io/projected/8444467c-d711-4618-8518-1c45921e6493-kube-api-access-qwncx\") pod \"ovn-controller-ovs-b4tvr\" (UID: \"8444467c-d711-4618-8518-1c45921e6493\") " pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.664183 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.681890 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.683879 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.700071 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.700171 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.700094 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.703033 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8h2k5" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.703439 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.709144 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.861618 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.861708 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqwrc\" (UniqueName: \"kubernetes.io/projected/d93546da-e35c-418f-a1e1-9d7b65c42829-kube-api-access-kqwrc\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.861806 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.861861 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.862001 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.862124 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.862257 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.862303 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-config\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.963846 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqwrc\" (UniqueName: \"kubernetes.io/projected/d93546da-e35c-418f-a1e1-9d7b65c42829-kube-api-access-kqwrc\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.963950 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964013 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964073 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964109 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964400 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964442 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-config\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964488 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.964776 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.965628 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-config\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.970500 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d93546da-e35c-418f-a1e1-9d7b65c42829-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.979684 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.980460 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.980746 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.980798 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c8abda16c936247f0cd9d07c3e7e8dccbcd8ba56626815d5c509cf849a82f01e/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.989663 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqwrc\" (UniqueName: \"kubernetes.io/projected/d93546da-e35c-418f-a1e1-9d7b65c42829-kube-api-access-kqwrc\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0"
Dec 09 12:25:21 crc kubenswrapper[4703]: I1209 12:25:21.994788 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93546da-e35c-418f-a1e1-9d7b65c42829-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0"
Dec 09 12:25:22 crc kubenswrapper[4703]: I1209 12:25:22.026088 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a14977b7-4d0e-4dab-b028-6112fc009fcb\") pod \"ovsdbserver-nb-0\" (UID: \"d93546da-e35c-418f-a1e1-9d7b65c42829\") " pod="openstack/ovsdbserver-nb-0"
Dec 09 12:25:22 crc kubenswrapper[4703]: I1209 12:25:22.328667 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.908814 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.911514 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.916434 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.916839 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.916977 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mzscb"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.917622 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 09 12:25:25 crc kubenswrapper[4703]: I1209 12:25:25.928177 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.095871 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.095965 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.095992 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jk6l\" (UniqueName: \"kubernetes.io/projected/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-kube-api-access-9jk6l\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.096042 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.096071 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-config\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.096147 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87a55052-6743-4111-b2a2-225dd0408e36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87a55052-6743-4111-b2a2-225dd0408e36\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.096178 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.096256 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.199143 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87a55052-6743-4111-b2a2-225dd0408e36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87a55052-6743-4111-b2a2-225dd0408e36\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.199277 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.200694 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.200808 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.200907 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.200932 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jk6l\" (UniqueName: \"kubernetes.io/projected/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-kube-api-access-9jk6l\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.200964 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.201005 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-config\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.202359 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.202823 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.202842 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.202862 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87a55052-6743-4111-b2a2-225dd0408e36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87a55052-6743-4111-b2a2-225dd0408e36\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b75f0b86548e43adabb00684e112cb261723436bbab4da0e17ef189ce6dad639/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.203500 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-config\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.207732 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.210210 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.221378 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jk6l\" (UniqueName: \"kubernetes.io/projected/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-kube-api-access-9jk6l\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.231341 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d85de252-1b8c-45f0-a143-eaa5f2d52fcb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.255856 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87a55052-6743-4111-b2a2-225dd0408e36\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87a55052-6743-4111-b2a2-225dd0408e36\") pod \"ovsdbserver-sb-0\" (UID: \"d85de252-1b8c-45f0-a143-eaa5f2d52fcb\") " pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:26 crc kubenswrapper[4703]: I1209 12:25:26.545722 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.917790 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"]
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.919298 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.922661 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.923583 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.925326 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-wccf9"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.925403 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.925776 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http"
Dec 09 12:25:29 crc kubenswrapper[4703]: I1209 12:25:29.941291 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.074792 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.076726 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v848h\" (UniqueName: \"kubernetes.io/projected/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-kube-api-access-v848h\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.076971 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.077088 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.077128 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.130300 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.136935 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.140336 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.140410 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.140543 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.141647 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.180027 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.180091 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.180161 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.180238 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v848h\" (UniqueName: \"kubernetes.io/projected/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-kube-api-access-v848h\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.180328 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.181920 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-config\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.182775 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.206490 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.219967 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.222238 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v848h\" (UniqueName: \"kubernetes.io/projected/36c4cea6-8d9a-4979-83c0-28ba95bd7c7e-kube-api-access-v848h\") pod \"cloudkitty-lokistack-distributor-664b687b54-cjj5n\" (UID: \"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e\") " pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.241885 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.243571 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.245268 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.248612 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.248913 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.258645 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281828 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281904 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldcw\" (UniqueName: \"kubernetes.io/projected/c05374ec-bb40-45f2-bc03-f84a6eb40f42-kube-api-access-sldcw\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281923 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281956 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.281979 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.384523 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.384634 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.384690 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.384735 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.384892 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385054 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385099 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlstc\" (UniqueName: \"kubernetes.io/projected/810fed49-b38e-4404-a03c-05dc5aa59ccb-kube-api-access-zlstc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385174 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385275 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385331 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.385360 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldcw\" (UniqueName: \"kubernetes.io/projected/c05374ec-bb40-45f2-bc03-f84a6eb40f42-kube-api-access-sldcw\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.387083 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-config\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.387818 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.420175 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.420422 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.421456 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c05374ec-bb40-45f2-bc03-f84a6eb40f42-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.427703 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.429247 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.436260 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.436539 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.437655 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.438221 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.438397 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.438552 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.446368 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldcw\" (UniqueName: \"kubernetes.io/projected/c05374ec-bb40-45f2-bc03-f84a6eb40f42-kube-api-access-sldcw\") pod \"cloudkitty-lokistack-querier-5467947bf7-tfqqj\" (UID: \"c05374ec-bb40-45f2-bc03-f84a6eb40f42\") " pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.459291 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.474877 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.477160 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.479507 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.487358 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.487457 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.487493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlstc\" (UniqueName: \"kubernetes.io/projected/810fed49-b38e-4404-a03c-05dc5aa59ccb-kube-api-access-zlstc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.487543 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.487626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.488509 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.489444 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810fed49-b38e-4404-a03c-05dc5aa59ccb-config\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.495128 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.495766 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810fed49-b38e-4404-a03c-05dc5aa59ccb-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.495959 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-gwl8m"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.523689 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlstc\" (UniqueName: \"kubernetes.io/projected/810fed49-b38e-4404-a03c-05dc5aa59ccb-kube-api-access-zlstc\") pod \"cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g\" (UID: \"810fed49-b38e-4404-a03c-05dc5aa59ccb\") " pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.546210 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"]
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.589580 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.589719 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590039 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590249 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590348 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590424 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590517 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590623 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590676 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590739 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.590870 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591111 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591174 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591212 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591252 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mjh\" (UniqueName: \"kubernetes.io/projected/378ce3ef-fa33-4466-afa9-cc57b84fed76-kube-api-access-m5mjh\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591286 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591316 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9zd\" (UniqueName: \"kubernetes.io/projected/b56f6841-bd74-4321-bd6d-a2478a62a8de-kube-api-access-5q9zd\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591344 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"
Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.591366 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693312 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693366 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693400 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693422 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693449 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693486 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693527 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693549 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-ca-bundle\") pod 
\"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693565 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693581 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5mjh\" (UniqueName: \"kubernetes.io/projected/378ce3ef-fa33-4466-afa9-cc57b84fed76-kube-api-access-m5mjh\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693601 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693621 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9zd\" (UniqueName: \"kubernetes.io/projected/b56f6841-bd74-4321-bd6d-a2478a62a8de-kube-api-access-5q9zd\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693639 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693656 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693692 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693716 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " 
pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693741 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.693766 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.694658 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.694765 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.694764 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.695559 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.695822 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.695839 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " 
pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.696544 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.697699 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.698534 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.698748 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-rbac\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.699187 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b56f6841-bd74-4321-bd6d-a2478a62a8de-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.699918 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.700099 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.701832 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.704084 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/378ce3ef-fa33-4466-afa9-cc57b84fed76-tenants\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.710263 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/b56f6841-bd74-4321-bd6d-a2478a62a8de-tls-secret\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.714290 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5mjh\" (UniqueName: \"kubernetes.io/projected/378ce3ef-fa33-4466-afa9-cc57b84fed76-kube-api-access-m5mjh\") pod \"cloudkitty-lokistack-gateway-bc75944f-sdssq\" (UID: \"378ce3ef-fa33-4466-afa9-cc57b84fed76\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.717404 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9zd\" (UniqueName: \"kubernetes.io/projected/b56f6841-bd74-4321-bd6d-a2478a62a8de-kube-api-access-5q9zd\") pod \"cloudkitty-lokistack-gateway-bc75944f-5rr9c\" (UID: \"b56f6841-bd74-4321-bd6d-a2478a62a8de\") " pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.838870 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:25:30 crc kubenswrapper[4703]: I1209 12:25:30.858407 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.143684 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.145549 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.153485 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.169629 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.173534 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.226256 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.227585 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.236632 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.236937 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.275260 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315174 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315275 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315308 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315332 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315357 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315612 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315707 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqfg7\" (UniqueName: \"kubernetes.io/projected/be3c1046-2c78-46ab-a62f-f4270561ca1c-kube-api-access-zqfg7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" 
Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315760 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrbj\" (UniqueName: \"kubernetes.io/projected/92ca14de-b3f5-4f21-96d6-a71281b49c5c-kube-api-access-hjrbj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315795 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315904 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.315997 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.316136 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.316228 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.316261 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.316474 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.336104 4703 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.337581 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.342990 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.344310 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.372465 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.417978 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418054 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418113 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418155 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418183 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418232 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418260 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418295 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418384 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418512 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418572 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418624 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418728 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418795 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418831 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418873 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418909 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418943 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418975 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8nbv\" (UniqueName: \"kubernetes.io/projected/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-kube-api-access-x8nbv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.418999 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqfg7\" (UniqueName: \"kubernetes.io/projected/be3c1046-2c78-46ab-a62f-f4270561ca1c-kube-api-access-zqfg7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419036 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrbj\" (UniqueName: \"kubernetes.io/projected/92ca14de-b3f5-4f21-96d6-a71281b49c5c-kube-api-access-hjrbj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419069 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419064 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419258 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419351 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.419458 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.421909 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.423782 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.431333 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.433909 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.438081 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.439545 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.443066 4703 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/be3c1046-2c78-46ab-a62f-f4270561ca1c-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.458747 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrbj\" (UniqueName: \"kubernetes.io/projected/92ca14de-b3f5-4f21-96d6-a71281b49c5c-kube-api-access-hjrbj\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.468574 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.471717 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqfg7\" (UniqueName: \"kubernetes.io/projected/be3c1046-2c78-46ab-a62f-f4270561ca1c-kube-api-access-zqfg7\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.472739 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"be3c1046-2c78-46ab-a62f-f4270561ca1c\") " pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.483069 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.499984 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/92ca14de-b3f5-4f21-96d6-a71281b49c5c-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520671 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8nbv\" (UniqueName: \"kubernetes.io/projected/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-kube-api-access-x8nbv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520782 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520825 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520850 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520870 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520916 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.520978 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.521256 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.522282 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.523260 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.527100 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: 
\"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.528041 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.530087 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.541056 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8nbv\" (UniqueName: \"kubernetes.io/projected/ffe1d3a3-3faf-4228-b28b-fcfb12cba786-kube-api-access-x8nbv\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.552552 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"ffe1d3a3-3faf-4228-b28b-fcfb12cba786\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.656333 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:25:31 crc kubenswrapper[4703]: I1209 12:25:31.775860 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:25:33 crc kubenswrapper[4703]: I1209 12:25:33.641544 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92ca14de-b3f5-4f21-96d6-a71281b49c5c-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"92ca14de-b3f5-4f21-96d6-a71281b49c5c\") " pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:33 crc kubenswrapper[4703]: I1209 12:25:33.672364 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:25:38 crc kubenswrapper[4703]: I1209 12:25:38.088769 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.913638 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.914312 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/alertmanager/config/alertmanager.yaml.gz --config-envsubst-file=/etc/alertmanager/config_out/alertmanager.env.yaml --watched-dir=/etc/alertmanager/config],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:-1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkqdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.917440 4703 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/alertmanager-metric-storage-0" podUID="fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b" Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.927011 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.927544 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltxxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(4ff9ddab-6c86-431b-b31b-3ec7372b7144): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" 
Dec 09 12:25:38 crc kubenswrapper[4703]: E1209 12:25:38.928884 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" Dec 09 12:25:39 crc kubenswrapper[4703]: E1209 12:25:39.014862 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b" Dec 09 12:25:39 crc kubenswrapper[4703]: E1209 12:25:39.015002 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" Dec 09 12:25:40 crc kubenswrapper[4703]: I1209 12:25:40.019780 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d93546da-e35c-418f-a1e1-9d7b65c42829","Type":"ContainerStarted","Data":"02fda434ebc40528f3ce3834e6df0f6a57eb1d069363c6cd46652bfe2cc9a156"} Dec 09 12:25:45 crc kubenswrapper[4703]: E1209 12:25:45.202995 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 12:25:45 crc kubenswrapper[4703]: E1209 12:25:45.203776 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dv4zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0b084a1a-44b8-439b-ad26-d1ead9d2f225): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:45 crc kubenswrapper[4703]: E1209 12:25:45.205111 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" Dec 09 12:25:46 crc kubenswrapper[4703]: E1209 12:25:46.095738 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.600789 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.601626 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcq8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(00872775-af9b-49e8-9a6e-08baa2171c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.602821 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="00872775-af9b-49e8-9a6e-08baa2171c88" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.642486 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.642701 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hntq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(9bdf302a-4c2d-41c3-b1be-c08e52c5244c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.644501 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="9bdf302a-4c2d-41c3-b1be-c08e52c5244c" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.682391 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.682639 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlc6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:49 crc kubenswrapper[4703]: E1209 12:25:49.683825 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.126801 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="00872775-af9b-49e8-9a6e-08baa2171c88" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.129354 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="9bdf302a-4c2d-41c3-b1be-c08e52c5244c" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.129430 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.816666 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.816871 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n76h5d4h64bhb6h86h686h579h68fh59fhd8h67fh85h695h5fdh5cdh597h9h554h545h5dbh55fh665hdbh546h4hdbh598h577h5b7h644h9bh5b5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdm4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(dad57d2e-6021-4515-9075-243ab3ce4aec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:50 crc kubenswrapper[4703]: E1209 12:25:50.818089 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="dad57d2e-6021-4515-9075-243ab3ce4aec" Dec 09 12:25:51 crc kubenswrapper[4703]: E1209 12:25:51.133765 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="dad57d2e-6021-4515-9075-243ab3ce4aec" Dec 09 12:25:51 crc kubenswrapper[4703]: I1209 12:25:51.283348 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"] Dec 09 12:25:57 crc kubenswrapper[4703]: I1209 12:25:57.179838 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78"] Dec 09 12:25:57 crc kubenswrapper[4703]: I1209 12:25:57.347636 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c"] Dec 09 12:25:57 crc kubenswrapper[4703]: I1209 12:25:57.802475 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-b4tvr"] Dec 09 12:25:59 crc kubenswrapper[4703]: I1209 12:25:59.220619 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n" event={"ID":"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e","Type":"ContainerStarted","Data":"299f98a17e0e1041cf39d39d8c304f69aa460d89f0f63bc04842f696429ef1c0"} Dec 09 12:26:00 crc kubenswrapper[4703]: W1209 12:26:00.886491 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8444467c_d711_4618_8518_1c45921e6493.slice/crio-0956d94bb4afbf3c58e70c704269f8ed2488432fb7a0403ddc6898d4de357ba6 WatchSource:0}: Error finding container 0956d94bb4afbf3c58e70c704269f8ed2488432fb7a0403ddc6898d4de357ba6: Status 404 returned error can't find the container with id 0956d94bb4afbf3c58e70c704269f8ed2488432fb7a0403ddc6898d4de357ba6 Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.189894 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.190378 4703 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc6sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kmmrj_openstack(1670b15c-6a1f-429f-adba-1f4c9e61fc58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.191695 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" podUID="1670b15c-6a1f-429f-adba-1f4c9e61fc58" Dec 09 12:26:01 crc kubenswrapper[4703]: I1209 12:26:01.241456 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78" event={"ID":"4222b5eb-89d5-41be-ab08-6f3f3f4dab42","Type":"ContainerStarted","Data":"b104e47764ebcef00782c524371795afce219d53c31288f6430397ee24e1f13c"} Dec 09 12:26:01 crc kubenswrapper[4703]: I1209 12:26:01.244016 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4tvr" event={"ID":"8444467c-d711-4618-8518-1c45921e6493","Type":"ContainerStarted","Data":"0956d94bb4afbf3c58e70c704269f8ed2488432fb7a0403ddc6898d4de357ba6"} Dec 09 12:26:01 crc kubenswrapper[4703]: I1209 12:26:01.246577 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" event={"ID":"b56f6841-bd74-4321-bd6d-a2478a62a8de","Type":"ContainerStarted","Data":"135587f76eeb2cc5735625c585560cb3e111bdbaf65bf645ecc7cba12bb69899"} Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.326170 
4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.326373 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz8qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-rxj84_openstack(2da8660a-e38a-41f1-8c85-e262a4ad191a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.328061 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" Dec 09 12:26:01 crc kubenswrapper[4703]: I1209 12:26:01.344279 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"] Dec 09 12:26:01 crc kubenswrapper[4703]: I1209 12:26:01.477920 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.604144 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.604493 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd89f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-tbdjp_openstack(a733df4a-37e3-4c6a-825e-cb27ba720b71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.605785 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.623182 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.623440 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt4xk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-qmw2d_openstack(750b23e7-4a88-413d-89f8-9bb42e8f48a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.624904 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" podUID="750b23e7-4a88-413d-89f8-9bb42e8f48a6" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.818243 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Dec 09 12:26:01 crc kubenswrapper[4703]: E1209 12:26:01.818732 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64dh85h8bh69h99h54h67bh86h66h5bdh658hfbh595h56dhdh58dh597h56h5f7h5fbh5fch8ch65ch54dh5ddh565h66bh74h58h5f9h9dh578q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqwrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(d93546da-e35c-418f-a1e1-9d7b65c42829): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:01 crc kubenswrapper[4703]: W1209 12:26:01.829584 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe1d3a3_3faf_4228_b28b_fcfb12cba786.slice/crio-022e0ba4c91ee31737ebe864defe7c8476cf48618169b0f5257cb5828af4fd47 WatchSource:0}: Error finding container 022e0ba4c91ee31737ebe864defe7c8476cf48618169b0f5257cb5828af4fd47: Status 404 returned error can't find the container with id 022e0ba4c91ee31737ebe864defe7c8476cf48618169b0f5257cb5828af4fd47 Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.166650 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.267292 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.267279 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kmmrj" event={"ID":"1670b15c-6a1f-429f-adba-1f4c9e61fc58","Type":"ContainerDied","Data":"054c5cb01548155af78490d72fd9fd77989388740c6be0fd7243dc969f143778"} Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.270316 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj" event={"ID":"c05374ec-bb40-45f2-bc03-f84a6eb40f42","Type":"ContainerStarted","Data":"dd9531f699f1060da21a5d603bae7b676097209dd2a921214c85dc30841bdd26"} Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.274180 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ffe1d3a3-3faf-4228-b28b-fcfb12cba786","Type":"ContainerStarted","Data":"022e0ba4c91ee31737ebe864defe7c8476cf48618169b0f5257cb5828af4fd47"} Dec 09 12:26:02 crc kubenswrapper[4703]: E1209 12:26:02.282647 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.282690 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6sm\" (UniqueName: \"kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm\") pod \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.282866 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config\") pod \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\" (UID: \"1670b15c-6a1f-429f-adba-1f4c9e61fc58\") " Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.283837 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config" (OuterVolumeSpecName: "config") pod "1670b15c-6a1f-429f-adba-1f4c9e61fc58" (UID: "1670b15c-6a1f-429f-adba-1f4c9e61fc58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:02 crc kubenswrapper[4703]: E1209 12:26:02.281963 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.293606 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm" (OuterVolumeSpecName: "kube-api-access-zc6sm") pod "1670b15c-6a1f-429f-adba-1f4c9e61fc58" (UID: "1670b15c-6a1f-429f-adba-1f4c9e61fc58"). InnerVolumeSpecName "kube-api-access-zc6sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.388915 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6sm\" (UniqueName: \"kubernetes.io/projected/1670b15c-6a1f-429f-adba-1f4c9e61fc58-kube-api-access-zc6sm\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.389368 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1670b15c-6a1f-429f-adba-1f4c9e61fc58-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.398494 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g"] Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.691047 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"] Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.706530 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kmmrj"] Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.828129 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.845059 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Dec 09 12:26:02 crc kubenswrapper[4703]: I1209 12:26:02.866826 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq"] Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.092029 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1670b15c-6a1f-429f-adba-1f4c9e61fc58" path="/var/lib/kubelet/pods/1670b15c-6a1f-429f-adba-1f4c9e61fc58/volumes" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.149952 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.233737 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.286280 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"92ca14de-b3f5-4f21-96d6-a71281b49c5c","Type":"ContainerStarted","Data":"bf1ac3815b62e501a1a84b18cfd0c99d52c4d68e25dc6caffedaa7453e7085c7"} Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.290767 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g" event={"ID":"810fed49-b38e-4404-a03c-05dc5aa59ccb","Type":"ContainerStarted","Data":"a7250232bc12a79d80f6b0f17af38ff8b2b0ddd91970fe03a3b834bf437164e4"} Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.293370 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" event={"ID":"750b23e7-4a88-413d-89f8-9bb42e8f48a6","Type":"ContainerDied","Data":"6292c4d37f3634d5028306e1127f544dd88d52b018bffa73b3aa3bf9ed634207"} Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.293402 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-qmw2d" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.298342 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" event={"ID":"378ce3ef-fa33-4466-afa9-cc57b84fed76","Type":"ContainerStarted","Data":"0ed7a790e4d812a94cb722e79da62025053cc600889874408e587a966b27b1b9"} Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.301853 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"be3c1046-2c78-46ab-a62f-f4270561ca1c","Type":"ContainerStarted","Data":"95e251f61ab12ff2a6488a4dc2db9699dc08677d60d4c507a1c0c277b1f108ff"} Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.317982 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4xk\" (UniqueName: \"kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk\") pod \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.318931 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config\") pod \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.319148 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc\") pod \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\" (UID: \"750b23e7-4a88-413d-89f8-9bb42e8f48a6\") " Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.320006 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config" (OuterVolumeSpecName: "config") pod "750b23e7-4a88-413d-89f8-9bb42e8f48a6" (UID: "750b23e7-4a88-413d-89f8-9bb42e8f48a6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.320750 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "750b23e7-4a88-413d-89f8-9bb42e8f48a6" (UID: "750b23e7-4a88-413d-89f8-9bb42e8f48a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.320832 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.344361 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk" (OuterVolumeSpecName: "kube-api-access-nt4xk") pod "750b23e7-4a88-413d-89f8-9bb42e8f48a6" (UID: "750b23e7-4a88-413d-89f8-9bb42e8f48a6"). InnerVolumeSpecName "kube-api-access-nt4xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.422619 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/750b23e7-4a88-413d-89f8-9bb42e8f48a6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.422666 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4xk\" (UniqueName: \"kubernetes.io/projected/750b23e7-4a88-413d-89f8-9bb42e8f48a6-kube-api-access-nt4xk\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.671605 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"] Dec 09 12:26:03 crc kubenswrapper[4703]: I1209 12:26:03.674409 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-qmw2d"] Dec 09 12:26:03 crc kubenswrapper[4703]: E1209 12:26:03.932996 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 12:26:03 crc kubenswrapper[4703]: E1209 12:26:03.933058 4703 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 12:26:03 crc kubenswrapper[4703]: E1209 12:26:03.933394 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v28sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(ab5733f2-517c-433a-bf6c-f1cd26dde97b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:26:03 crc kubenswrapper[4703]: E1209 12:26:03.934545 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" Dec 09 12:26:04 crc kubenswrapper[4703]: I1209 12:26:04.313943 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d85de252-1b8c-45f0-a143-eaa5f2d52fcb","Type":"ContainerStarted","Data":"f57c5ca162a213f7da6aaf359e8cafd0cd3f5f7ed8ce3bdda8d1d25c6b462636"} Dec 09 12:26:04 crc kubenswrapper[4703]: E1209 12:26:04.316768 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" Dec 09 12:26:05 crc kubenswrapper[4703]: I1209 12:26:05.085626 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750b23e7-4a88-413d-89f8-9bb42e8f48a6" path="/var/lib/kubelet/pods/750b23e7-4a88-413d-89f8-9bb42e8f48a6/volumes" Dec 09 12:26:12 crc kubenswrapper[4703]: I1209 12:26:12.394642 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n" event={"ID":"36c4cea6-8d9a-4979-83c0-28ba95bd7c7e","Type":"ContainerStarted","Data":"a6a67fd5e96f67534b46b1f00fcdaead10de1ec3fac6ea30909cfc15857d1f95"} Dec 09 12:26:12 crc kubenswrapper[4703]: I1209 12:26:12.395251 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n" Dec 09 12:26:12 crc kubenswrapper[4703]: I1209 12:26:12.396952 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00872775-af9b-49e8-9a6e-08baa2171c88","Type":"ContainerStarted","Data":"6a99b44f3009bb7f2f75baa467a842e33224d0c0bcb44c79112b4ac63aa699fb"} Dec 09 12:26:12 crc kubenswrapper[4703]: I1209 12:26:12.399070 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4tvr" event={"ID":"8444467c-d711-4618-8518-1c45921e6493","Type":"ContainerStarted","Data":"a86886c543089e76e91a66a6dc27e431092e63b52897c9939c9f0af82d746005"} Dec 09 12:26:12 crc kubenswrapper[4703]: I1209 12:26:12.422136 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n" podStartSLOduration=32.028003523 podStartE2EDuration="43.422104643s" podCreationTimestamp="2025-12-09 12:25:29 +0000 UTC" firstStartedPulling="2025-12-09 12:25:58.879585726 +0000 UTC m=+1258.128349245" lastFinishedPulling="2025-12-09 12:26:10.273686846 +0000 UTC m=+1269.522450365" observedRunningTime="2025-12-09 12:26:12.418181611 +0000 UTC m=+1271.666945140" watchObservedRunningTime="2025-12-09 12:26:12.422104643 +0000 UTC m=+1271.670868162" Dec 09 12:26:13 crc kubenswrapper[4703]: I1209 12:26:13.407857 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" event={"ID":"b56f6841-bd74-4321-bd6d-a2478a62a8de","Type":"ContainerStarted","Data":"3fc29b24964c43528a3d41f1ab2e2c726ad2748492b27e0f354994f3647cd5da"} Dec 09 12:26:13 crc kubenswrapper[4703]: I1209 12:26:13.408162 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:26:13 crc kubenswrapper[4703]: I1209 12:26:13.409779 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9bdf302a-4c2d-41c3-b1be-c08e52c5244c","Type":"ContainerStarted","Data":"827ee28e664477f9f171ae359aefa4d1eff653db77a3880710e6ac773d2af17e"} Dec 09 12:26:13 crc kubenswrapper[4703]: I1209 12:26:13.419261 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" Dec 09 12:26:13 crc kubenswrapper[4703]: I1209 12:26:13.435488 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-5rr9c" podStartSLOduration=34.584473547 podStartE2EDuration="43.435462673s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:00.923057795 +0000 UTC m=+1260.171821314" lastFinishedPulling="2025-12-09 12:26:09.774046921 +0000 UTC m=+1269.022810440" observedRunningTime="2025-12-09 12:26:13.425526452 +0000 UTC m=+1272.674289981" watchObservedRunningTime="2025-12-09 12:26:13.435462673 +0000 UTC m=+1272.684226192" Dec 09 12:26:14 crc kubenswrapper[4703]: E1209 12:26:14.159680 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="d93546da-e35c-418f-a1e1-9d7b65c42829" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.427897 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" event={"ID":"378ce3ef-fa33-4466-afa9-cc57b84fed76","Type":"ContainerStarted","Data":"15564e9d5b15224b88de13eac0f8bece3172f1e3e724fa0d751606c8d02e7b12"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.428337 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.437023 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj" event={"ID":"c05374ec-bb40-45f2-bc03-f84a6eb40f42","Type":"ContainerStarted","Data":"20576aa18214abb233cdef21cb4c4f624ea40f4ffe0e17e643e82e4aeb8e9054"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.437127 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.460444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"be3c1046-2c78-46ab-a62f-f4270561ca1c","Type":"ContainerStarted","Data":"b6d56194e69ff58cb822fecaeac7e4e97c122813981feeefe0cf640be69ff697"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.461262 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.461322 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.464569 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"92ca14de-b3f5-4f21-96d6-a71281b49c5c","Type":"ContainerStarted","Data":"a417b99def5f850b632d23142fa53f88e5425970fa367359859bd2e88dced970"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.465266 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.478879 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78" event={"ID":"4222b5eb-89d5-41be-ab08-6f3f3f4dab42","Type":"ContainerStarted","Data":"287a051644e5eeda0f22cae8c0306557eacc834b6b76b1b053d669c66b318c88"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.479033 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-lsj78" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.484346 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerStarted","Data":"224e6199fe19659e824e440bbf703885352f49c9d09a917c672c84d49e959a75"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.488051 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g" event={"ID":"810fed49-b38e-4404-a03c-05dc5aa59ccb","Type":"ContainerStarted","Data":"ac1bdb21f7a3b5df25b2e3e450d58b6d7e27ff54daa25d33c09fb243a8976962"} Dec 09 12:26:14 crc kubenswrapper[4703]: 
I1209 12:26:14.488256 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.492449 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-bc75944f-sdssq" podStartSLOduration=36.903951743 podStartE2EDuration="44.492430529s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:03.160300979 +0000 UTC m=+1262.409064498" lastFinishedPulling="2025-12-09 12:26:10.748779755 +0000 UTC m=+1269.997543284" observedRunningTime="2025-12-09 12:26:14.488577707 +0000 UTC m=+1273.737341226" watchObservedRunningTime="2025-12-09 12:26:14.492430529 +0000 UTC m=+1273.741194048" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.510322 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d93546da-e35c-418f-a1e1-9d7b65c42829","Type":"ContainerStarted","Data":"d00479f56a787bed89f48393c0da8cb648901792f869cc2a2bdcec3368cc2609"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.527013 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d85de252-1b8c-45f0-a143-eaa5f2d52fcb","Type":"ContainerStarted","Data":"c97ca85c1c05cd43bba6abc2a0f0604f6a1ed951d832877753f8ee1e420153bc"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.548058 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"ffe1d3a3-3faf-4228-b28b-fcfb12cba786","Type":"ContainerStarted","Data":"bab24d466912639a1698e8cb07e5e80884a51316bf5c724f83dd24759eb172e7"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.548420 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.552404 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerStarted","Data":"7004fbf8033811bf800c6568cb5bc504bf11e5f90793c8b4a4ad5e5198027033"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.556097 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dad57d2e-6021-4515-9075-243ab3ce4aec","Type":"ContainerStarted","Data":"b99b8466f14aae2abbffbe615e70afcb746e995e1baf98a5d220e816920f86d9"} Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.560513 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.623643 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g" podStartSLOduration=36.496047631 podStartE2EDuration="44.623617398s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:02.621353971 +0000 UTC m=+1261.870117490" lastFinishedPulling="2025-12-09 12:26:10.748923738 +0000 UTC m=+1269.997687257" observedRunningTime="2025-12-09 12:26:14.573616433 +0000 UTC m=+1273.822379962" watchObservedRunningTime="2025-12-09 12:26:14.623617398 +0000 UTC m=+1273.872380917" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.650173 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj" 
podStartSLOduration=36.077787364 podStartE2EDuration="44.650150716s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:02.203483145 +0000 UTC m=+1261.452246664" lastFinishedPulling="2025-12-09 12:26:10.775846497 +0000 UTC m=+1270.024610016" observedRunningTime="2025-12-09 12:26:14.634402571 +0000 UTC m=+1273.883166090" watchObservedRunningTime="2025-12-09 12:26:14.650150716 +0000 UTC m=+1273.898914235" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.660953 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lsj78" podStartSLOduration=44.238535841 podStartE2EDuration="53.660932868s" podCreationTimestamp="2025-12-09 12:25:21 +0000 UTC" firstStartedPulling="2025-12-09 12:26:00.920729274 +0000 UTC m=+1260.169492793" lastFinishedPulling="2025-12-09 12:26:10.343126301 +0000 UTC m=+1269.591889820" observedRunningTime="2025-12-09 12:26:14.658770452 +0000 UTC m=+1273.907533971" watchObservedRunningTime="2025-12-09 12:26:14.660932868 +0000 UTC m=+1273.909696387" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.685445 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=37.092559501 podStartE2EDuration="44.685424383s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:03.155926973 +0000 UTC m=+1262.404690502" lastFinishedPulling="2025-12-09 12:26:10.748791865 +0000 UTC m=+1269.997555384" observedRunningTime="2025-12-09 12:26:14.684756815 +0000 UTC m=+1273.933520334" watchObservedRunningTime="2025-12-09 12:26:14.685424383 +0000 UTC m=+1273.934187902" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.757937 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=37.141520919 podStartE2EDuration="44.757896728s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:03.160310479 +0000 UTC m=+1262.409074008" lastFinishedPulling="2025-12-09 12:26:10.776686298 +0000 UTC m=+1270.025449817" observedRunningTime="2025-12-09 12:26:14.743099139 +0000 UTC m=+1273.991862658" watchObservedRunningTime="2025-12-09 12:26:14.757896728 +0000 UTC m=+1274.006660247" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.775006 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=5.768310172 podStartE2EDuration="59.774782522s" podCreationTimestamp="2025-12-09 12:25:15 +0000 UTC" firstStartedPulling="2025-12-09 12:25:16.769339786 +0000 UTC m=+1216.018103305" lastFinishedPulling="2025-12-09 12:26:10.775812136 +0000 UTC m=+1270.024575655" observedRunningTime="2025-12-09 12:26:14.766768481 +0000 UTC m=+1274.015532000" watchObservedRunningTime="2025-12-09 12:26:14.774782522 +0000 UTC m=+1274.023546041" Dec 09 12:26:14 crc kubenswrapper[4703]: I1209 12:26:14.789821 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=35.851589518 podStartE2EDuration="44.789796666s" podCreationTimestamp="2025-12-09 12:25:30 +0000 UTC" firstStartedPulling="2025-12-09 12:26:01.837980027 +0000 UTC m=+1261.086743546" lastFinishedPulling="2025-12-09 12:26:10.776187175 +0000 UTC m=+1270.024950694" observedRunningTime="2025-12-09 12:26:14.786831928 +0000 UTC m=+1274.035595457" watchObservedRunningTime="2025-12-09 12:26:14.789796666 +0000 
UTC m=+1274.038560185" Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.568477 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerStarted","Data":"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"} Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.572762 4703 generic.go:334] "Generic (PLEG): container finished" podID="8444467c-d711-4618-8518-1c45921e6493" containerID="a86886c543089e76e91a66a6dc27e431092e63b52897c9939c9f0af82d746005" exitCode=0 Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.572889 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4tvr" event={"ID":"8444467c-d711-4618-8518-1c45921e6493","Type":"ContainerDied","Data":"a86886c543089e76e91a66a6dc27e431092e63b52897c9939c9f0af82d746005"} Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.578612 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b","Type":"ContainerStarted","Data":"39c9baa6a9e7c6bd77a82098bf0ff90c4c8cffed7e49e4d684f27fe4ea1c02b7"} Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.581747 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d93546da-e35c-418f-a1e1-9d7b65c42829","Type":"ContainerStarted","Data":"b8e976bc146353567aef8c67dbfe091652449c1fb72d41b2a8ddc5a6e53d83a9"} Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.583543 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d85de252-1b8c-45f0-a143-eaa5f2d52fcb","Type":"ContainerStarted","Data":"2055d7f7e822fb7c84ff72218eff6c2a89306d5cfd843215a2bf23127e1da2d3"} Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.684081 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=44.835457228 podStartE2EDuration="51.684058125s" podCreationTimestamp="2025-12-09 12:25:24 +0000 UTC" firstStartedPulling="2025-12-09 12:26:03.924781735 +0000 UTC m=+1263.173545254" lastFinishedPulling="2025-12-09 12:26:10.773382632 +0000 UTC m=+1270.022146151" observedRunningTime="2025-12-09 12:26:15.676869136 +0000 UTC m=+1274.925632655" watchObservedRunningTime="2025-12-09 12:26:15.684058125 +0000 UTC m=+1274.932821644" Dec 09 12:26:15 crc kubenswrapper[4703]: I1209 12:26:15.704997 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.77336942 podStartE2EDuration="55.704976975s" podCreationTimestamp="2025-12-09 12:25:20 +0000 UTC" firstStartedPulling="2025-12-09 12:25:39.022519571 +0000 UTC m=+1238.271283090" lastFinishedPulling="2025-12-09 12:26:14.954127126 +0000 UTC m=+1274.202890645" observedRunningTime="2025-12-09 12:26:15.699637784 +0000 UTC m=+1274.948401303" watchObservedRunningTime="2025-12-09 12:26:15.704976975 +0000 UTC m=+1274.953740494" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.330123 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.546484 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.593876 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ab5733f2-517c-433a-bf6c-f1cd26dde97b","Type":"ContainerStarted","Data":"63db45493e255bade9b7208be7c74215ad6faec4f8719a73d20c5aab428154fb"} Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.594110 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.600356 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4tvr" event={"ID":"8444467c-d711-4618-8518-1c45921e6493","Type":"ContainerStarted","Data":"ed7acd86038e9e04f9a1103c9966772a32581a20e8eb79a73c814925925c443b"} Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.600461 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-b4tvr" event={"ID":"8444467c-d711-4618-8518-1c45921e6493","Type":"ContainerStarted","Data":"3db719d96415d3572745fb1071ebc620a6d0982b7f957b59858e44b13b47762d"} Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.603663 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.654543 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.710950 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.247332294 podStartE2EDuration="59.71092567s" podCreationTimestamp="2025-12-09 12:25:17 +0000 UTC" firstStartedPulling="2025-12-09 12:25:18.976373012 +0000 UTC m=+1218.225136531" lastFinishedPulling="2025-12-09 12:26:15.439966388 +0000 UTC m=+1274.688729907" observedRunningTime="2025-12-09 12:26:16.645480849 +0000 UTC m=+1275.894244368" watchObservedRunningTime="2025-12-09 12:26:16.71092567 +0000 UTC m=+1275.959689189" Dec 09 12:26:16 crc kubenswrapper[4703]: I1209 12:26:16.730094 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-b4tvr" podStartSLOduration=52.70050481 podStartE2EDuration="55.730069742s" podCreationTimestamp="2025-12-09 12:25:21 +0000 UTC" firstStartedPulling="2025-12-09 12:26:00.891147566 +0000 UTC m=+1260.139911105" lastFinishedPulling="2025-12-09 12:26:03.920712518 +0000 UTC m=+1263.169476037" observedRunningTime="2025-12-09 12:26:16.702795606 +0000 UTC m=+1275.951559125" watchObservedRunningTime="2025-12-09 12:26:16.730069742 +0000 UTC m=+1275.978833261" Dec 09 12:26:17 crc kubenswrapper[4703]: I1209 12:26:17.329702 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:17 crc kubenswrapper[4703]: I1209 12:26:17.546129 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:17 crc kubenswrapper[4703]: I1209 12:26:17.587678 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:19 crc kubenswrapper[4703]: I1209 12:26:19.382891 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:20 crc kubenswrapper[4703]: I1209 12:26:20.482779 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 12:26:20 crc kubenswrapper[4703]: I1209 12:26:20.638058 4703 generic.go:334] "Generic (PLEG): container finished" podID="2da8660a-e38a-41f1-8c85-e262a4ad191a" 
containerID="6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df" exitCode=0 Dec 09 12:26:20 crc kubenswrapper[4703]: I1209 12:26:20.638149 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" event={"ID":"2da8660a-e38a-41f1-8c85-e262a4ad191a","Type":"ContainerDied","Data":"6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df"} Dec 09 12:26:20 crc kubenswrapper[4703]: I1209 12:26:20.641550 4703 generic.go:334] "Generic (PLEG): container finished" podID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerID="c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa" exitCode=0 Dec 09 12:26:20 crc kubenswrapper[4703]: I1209 12:26:20.641593 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" event={"ID":"a733df4a-37e3-4c6a-825e-cb27ba720b71","Type":"ContainerDied","Data":"c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.592288 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.652442 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" event={"ID":"a733df4a-37e3-4c6a-825e-cb27ba720b71","Type":"ContainerStarted","Data":"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.652772 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.654437 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33" exitCode=0 Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.654492 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerDied","Data":"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.656636 4703 generic.go:334] "Generic (PLEG): container finished" podID="00872775-af9b-49e8-9a6e-08baa2171c88" containerID="6a99b44f3009bb7f2f75baa467a842e33224d0c0bcb44c79112b4ac63aa699fb" exitCode=0 Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.656686 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00872775-af9b-49e8-9a6e-08baa2171c88","Type":"ContainerDied","Data":"6a99b44f3009bb7f2f75baa467a842e33224d0c0bcb44c79112b4ac63aa699fb"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.662664 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" event={"ID":"2da8660a-e38a-41f1-8c85-e262a4ad191a","Type":"ContainerStarted","Data":"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.662929 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.674509 4703 generic.go:334] "Generic (PLEG): container finished" podID="9bdf302a-4c2d-41c3-b1be-c08e52c5244c" containerID="827ee28e664477f9f171ae359aefa4d1eff653db77a3880710e6ac773d2af17e" exitCode=0 Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 
12:26:21.674572 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9bdf302a-4c2d-41c3-b1be-c08e52c5244c","Type":"ContainerDied","Data":"827ee28e664477f9f171ae359aefa4d1eff653db77a3880710e6ac773d2af17e"} Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.686820 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" podStartSLOduration=3.060679666 podStartE2EDuration="1m10.686797625s" podCreationTimestamp="2025-12-09 12:25:11 +0000 UTC" firstStartedPulling="2025-12-09 12:25:12.590389715 +0000 UTC m=+1211.839153234" lastFinishedPulling="2025-12-09 12:26:20.216507674 +0000 UTC m=+1279.465271193" observedRunningTime="2025-12-09 12:26:21.678695793 +0000 UTC m=+1280.927459312" watchObservedRunningTime="2025-12-09 12:26:21.686797625 +0000 UTC m=+1280.935561144" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.814454 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" podStartSLOduration=4.006039033 podStartE2EDuration="1m10.814423491s" podCreationTimestamp="2025-12-09 12:25:11 +0000 UTC" firstStartedPulling="2025-12-09 12:25:13.051240374 +0000 UTC m=+1212.300003893" lastFinishedPulling="2025-12-09 12:26:19.859624832 +0000 UTC m=+1279.108388351" observedRunningTime="2025-12-09 12:26:21.789309641 +0000 UTC m=+1281.038073160" watchObservedRunningTime="2025-12-09 12:26:21.814423491 +0000 UTC m=+1281.063187010" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.943551 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"] Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.980245 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"] Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.982133 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:21 crc kubenswrapper[4703]: I1209 12:26:21.986432 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.003106 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.055078 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pk6c\" (UniqueName: \"kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.055127 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.055165 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.055235 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.112099 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-x2pt9"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.125771 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.131301 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.132503 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2pt9"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.161208 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pk6c\" (UniqueName: \"kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.161267 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.161305 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.161374 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.164669 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.165953 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.167564 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.208997 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pk6c\" (UniqueName: \"kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c\") pod \"dnsmasq-dns-7f896c8c65-5594z\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") " pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269281 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-config\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269445 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269624 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovn-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269675 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovs-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269767 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcdh\" (UniqueName: \"kubernetes.io/projected/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-kube-api-access-8lcdh\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.269805 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-combined-ca-bundle\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.371757 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovn-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372125 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovs-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372208 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lcdh\" (UniqueName: \"kubernetes.io/projected/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-kube-api-access-8lcdh\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc 
kubenswrapper[4703]: I1209 12:26:22.372239 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-combined-ca-bundle\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372251 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372308 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-config\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372370 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372586 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovs-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.372656 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-ovn-rundir\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.381557 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-config\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.383654 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.383779 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-combined-ca-bundle\") pod \"ovn-controller-metrics-x2pt9\" (UID: \"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.397262 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lcdh\" (UniqueName: \"kubernetes.io/projected/da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70-kube-api-access-8lcdh\") pod \"ovn-controller-metrics-x2pt9\" (UID: 
\"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70\") " pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.430340 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.432693 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.434981 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.439659 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.463487 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.464319 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.479145 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-x2pt9" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.581699 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.581999 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmth2\" (UniqueName: \"kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.582278 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.582390 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.582553 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.686556 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.686614 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.686671 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.686765 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.686787 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmth2\" (UniqueName: \"kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.688691 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.688705 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.689409 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.689690 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.714274 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmth2\" (UniqueName: \"kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2\") pod \"dnsmasq-dns-86db49b7ff-zjvxl\" (UID: 
\"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.739618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9bdf302a-4c2d-41c3-b1be-c08e52c5244c","Type":"ContainerStarted","Data":"b2470f64d0bd3ff7063d930f92e5d51bb1e4df2c0a13c00ecf1b678543c61015"} Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.780065 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"00872775-af9b-49e8-9a6e-08baa2171c88","Type":"ContainerStarted","Data":"82ab95b3e294f7e0fcea50e1798fe28ba0e34d3dd4aaaa044cd75a2e26d6c129"} Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.801603 4703 generic.go:334] "Generic (PLEG): container finished" podID="fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b" containerID="39c9baa6a9e7c6bd77a82098bf0ff90c4c8cffed7e49e4d684f27fe4ea1c02b7" exitCode=0 Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.802163 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b","Type":"ContainerDied","Data":"39c9baa6a9e7c6bd77a82098bf0ff90c4c8cffed7e49e4d684f27fe4ea1c02b7"} Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.806124 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.462374072 podStartE2EDuration="1m9.8061071s" podCreationTimestamp="2025-12-09 12:25:13 +0000 UTC" firstStartedPulling="2025-12-09 12:25:16.906581953 +0000 UTC m=+1216.155345472" lastFinishedPulling="2025-12-09 12:26:10.250314981 +0000 UTC m=+1269.499078500" observedRunningTime="2025-12-09 12:26:22.801075938 +0000 UTC m=+1282.049839457" watchObservedRunningTime="2025-12-09 12:26:22.8061071 +0000 UTC m=+1282.054870619" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.844885 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.906946 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=15.158877788 podStartE2EDuration="1m10.90691614s" podCreationTimestamp="2025-12-09 12:25:12 +0000 UTC" firstStartedPulling="2025-12-09 12:25:14.838413875 +0000 UTC m=+1214.087177394" lastFinishedPulling="2025-12-09 12:26:10.586452227 +0000 UTC m=+1269.835215746" observedRunningTime="2025-12-09 12:26:22.854927974 +0000 UTC m=+1282.103691493" watchObservedRunningTime="2025-12-09 12:26:22.90691614 +0000 UTC m=+1282.155679659" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.983978 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.986384 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.992035 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.992411 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.992494 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-64lxf" Dec 09 12:26:22 crc kubenswrapper[4703]: I1209 12:26:22.992698 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.009451 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.111232 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112058 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112230 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112341 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-scripts\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112404 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-config\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112475 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.112547 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6xn\" (UniqueName: \"kubernetes.io/projected/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-kube-api-access-tc6xn\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0" Dec 09 12:26:23 crc kubenswrapper[4703]: 
I1209 12:26:23.217827 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-scripts\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.217901 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-config\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.217944 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.217978 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6xn\" (UniqueName: \"kubernetes.io/projected/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-kube-api-access-tc6xn\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.218039 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.218093 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.218164 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.220480 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.221154 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-scripts\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.221266 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-config\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.229128 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.244053 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6xn\" (UniqueName: \"kubernetes.io/projected/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-kube-api-access-tc6xn\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.248937 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.265637 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa6ba9a6-c0d8-4260-867d-ad91464cc39b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fa6ba9a6-c0d8-4260-867d-ad91464cc39b\") " pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.454303 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"]
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.455014 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 09 12:26:23 crc kubenswrapper[4703]: W1209 12:26:23.481954 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod185ccfc8_6550_4ac1_89c2_b366fe27bbf2.slice/crio-64962d000ada7c80a6a07b21fc904acda89410b82c2702e86a355b3e74673bc4 WatchSource:0}: Error finding container 64962d000ada7c80a6a07b21fc904acda89410b82c2702e86a355b3e74673bc4: Status 404 returned error can't find the container with id 64962d000ada7c80a6a07b21fc904acda89410b82c2702e86a355b3e74673bc4
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.720543 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x2pt9"]
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.741903 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"]
Dec 09 12:26:23 crc kubenswrapper[4703]: W1209 12:26:23.750097 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a105fed_4e9c_4db9_972b_0d9f087f8ebb.slice/crio-00d4d919efd27bb785af6bd5f90561d3ef7d41b53525d7f217aae366d40790ec WatchSource:0}: Error finding container 00d4d919efd27bb785af6bd5f90561d3ef7d41b53525d7f217aae366d40790ec: Status 404 returned error can't find the container with id 00d4d919efd27bb785af6bd5f90561d3ef7d41b53525d7f217aae366d40790ec
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.819671 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2pt9" event={"ID":"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70","Type":"ContainerStarted","Data":"cfba15907b9a9097168575c2bd38bae68bbf458d1f8d2412d352dbd5263b0cc2"}
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.822890 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" event={"ID":"185ccfc8-6550-4ac1-89c2-b366fe27bbf2","Type":"ContainerStarted","Data":"64962d000ada7c80a6a07b21fc904acda89410b82c2702e86a355b3e74673bc4"}
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.836334 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="dnsmasq-dns" containerID="cri-o://4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd" gracePeriod=10
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.836704 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" event={"ID":"8a105fed-4e9c-4db9-972b-0d9f087f8ebb","Type":"ContainerStarted","Data":"00d4d919efd27bb785af6bd5f90561d3ef7d41b53525d7f217aae366d40790ec"}
Dec 09 12:26:23 crc kubenswrapper[4703]: I1209 12:26:23.836729 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="dnsmasq-dns" containerID="cri-o://5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac" gracePeriod=10
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.037532 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.037591 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.078226 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.562454 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.586324 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config\") pod \"a733df4a-37e3-4c6a-825e-cb27ba720b71\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.586676 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc\") pod \"a733df4a-37e3-4c6a-825e-cb27ba720b71\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.586704 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd89f\" (UniqueName: \"kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f\") pod \"a733df4a-37e3-4c6a-825e-cb27ba720b71\" (UID: \"a733df4a-37e3-4c6a-825e-cb27ba720b71\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.598182 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.607157 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f" (OuterVolumeSpecName: "kube-api-access-wd89f") pod "a733df4a-37e3-4c6a-825e-cb27ba720b71" (UID: "a733df4a-37e3-4c6a-825e-cb27ba720b71"). InnerVolumeSpecName "kube-api-access-wd89f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.650127 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config" (OuterVolumeSpecName: "config") pod "a733df4a-37e3-4c6a-825e-cb27ba720b71" (UID: "a733df4a-37e3-4c6a-825e-cb27ba720b71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.655621 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a733df4a-37e3-4c6a-825e-cb27ba720b71" (UID: "a733df4a-37e3-4c6a-825e-cb27ba720b71"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.688473 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc\") pod \"2da8660a-e38a-41f1-8c85-e262a4ad191a\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.688611 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config\") pod \"2da8660a-e38a-41f1-8c85-e262a4ad191a\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.688637 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz8qf\" (UniqueName: \"kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf\") pod \"2da8660a-e38a-41f1-8c85-e262a4ad191a\" (UID: \"2da8660a-e38a-41f1-8c85-e262a4ad191a\") "
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.689769 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.689798 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a733df4a-37e3-4c6a-825e-cb27ba720b71-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.689814 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd89f\" (UniqueName: \"kubernetes.io/projected/a733df4a-37e3-4c6a-825e-cb27ba720b71-kube-api-access-wd89f\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.692478 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf" (OuterVolumeSpecName: "kube-api-access-bz8qf") pod "2da8660a-e38a-41f1-8c85-e262a4ad191a" (UID: "2da8660a-e38a-41f1-8c85-e262a4ad191a"). InnerVolumeSpecName "kube-api-access-bz8qf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.744086 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config" (OuterVolumeSpecName: "config") pod "2da8660a-e38a-41f1-8c85-e262a4ad191a" (UID: "2da8660a-e38a-41f1-8c85-e262a4ad191a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.758971 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2da8660a-e38a-41f1-8c85-e262a4ad191a" (UID: "2da8660a-e38a-41f1-8c85-e262a4ad191a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.791645 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.791701 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da8660a-e38a-41f1-8c85-e262a4ad191a-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.791717 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz8qf\" (UniqueName: \"kubernetes.io/projected/2da8660a-e38a-41f1-8c85-e262a4ad191a-kube-api-access-bz8qf\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.863378 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x2pt9" event={"ID":"da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70","Type":"ContainerStarted","Data":"398645053b55a61a52acc3166f09537cdc3bdd772e2069732ffa459992249c24"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.867098 4703 generic.go:334] "Generic (PLEG): container finished" podID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerID="daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3" exitCode=0
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.867170 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" event={"ID":"185ccfc8-6550-4ac1-89c2-b366fe27bbf2","Type":"ContainerDied","Data":"daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.869990 4703 generic.go:334] "Generic (PLEG): container finished" podID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerID="8eb73e13671127b82b551c077e9e3c91cce7c658346ee217b6733639c1aa2bd2" exitCode=0
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.870117 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" event={"ID":"8a105fed-4e9c-4db9-972b-0d9f087f8ebb","Type":"ContainerDied","Data":"8eb73e13671127b82b551c077e9e3c91cce7c658346ee217b6733639c1aa2bd2"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.876316 4703 generic.go:334] "Generic (PLEG): container finished" podID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerID="4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd" exitCode=0
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.876420 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" event={"ID":"2da8660a-e38a-41f1-8c85-e262a4ad191a","Type":"ContainerDied","Data":"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.876453 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84" event={"ID":"2da8660a-e38a-41f1-8c85-e262a4ad191a","Type":"ContainerDied","Data":"7af8c900f12a8946e9ff7f91472096311f7b1faaaaa4bb594e0e1dd00b39c024"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.876472 4703 scope.go:117] "RemoveContainer" containerID="4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.876604 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rxj84"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.882027 4703 generic.go:334] "Generic (PLEG): container finished" podID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerID="5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac" exitCode=0
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.882152 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" event={"ID":"a733df4a-37e3-4c6a-825e-cb27ba720b71","Type":"ContainerDied","Data":"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.882206 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp" event={"ID":"a733df4a-37e3-4c6a-825e-cb27ba720b71","Type":"ContainerDied","Data":"11940062818d03432589dcbf1ae7095dfad66c0cb6365b38da50f98d0ab7c64d"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.882265 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-tbdjp"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.891335 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-x2pt9" podStartSLOduration=2.891313016 podStartE2EDuration="2.891313016s" podCreationTimestamp="2025-12-09 12:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:24.88955039 +0000 UTC m=+1284.138313909" watchObservedRunningTime="2025-12-09 12:26:24.891313016 +0000 UTC m=+1284.140076535"
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.910173 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fa6ba9a6-c0d8-4260-867d-ad91464cc39b","Type":"ContainerStarted","Data":"10ec66a90dc7522c3eb245b498ad8eb71fb3151d553081748d182b0453e6f6eb"}
Dec 09 12:26:24 crc kubenswrapper[4703]: I1209 12:26:24.991557 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"]
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:24.996901 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rxj84"]
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.092959 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" path="/var/lib/kubelet/pods/2da8660a-e38a-41f1-8c85-e262a4ad191a/volumes"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.140384 4703 scope.go:117] "RemoveContainer" containerID="6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.186001 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"]
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.219817 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-tbdjp"]
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.232578 4703 scope.go:117] "RemoveContainer" containerID="4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"
Dec 09 12:26:25 crc kubenswrapper[4703]: E1209 12:26:25.233591 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd\": container with ID starting with 4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd not found: ID does not exist" containerID="4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.233620 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd"} err="failed to get container status \"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd\": rpc error: code = NotFound desc = could not find container \"4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd\": container with ID starting with 4b76b7f6c0afa7a317f0875c6ba20387b7a32163577d289e75476f00ad74b7fd not found: ID does not exist"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.233725 4703 scope.go:117] "RemoveContainer" containerID="6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df"
Dec 09 12:26:25 crc kubenswrapper[4703]: E1209 12:26:25.234174 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df\": container with ID starting with 6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df not found: ID does not exist" containerID="6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.234213 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df"} err="failed to get container status \"6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df\": rpc error: code = NotFound desc = could not find container \"6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df\": container with ID starting with 6560bf96592ee1001e0bf5f98a53325917caab41ae68d70bfc706997e9bce0df not found: ID does not exist"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.234226 4703 scope.go:117] "RemoveContainer" containerID="5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.297057 4703 scope.go:117] "RemoveContainer" containerID="c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.353825 4703 scope.go:117] "RemoveContainer" containerID="5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"
Dec 09 12:26:25 crc kubenswrapper[4703]: E1209 12:26:25.355061 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac\": container with ID starting with 5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac not found: ID does not exist" containerID="5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.355105 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac"} err="failed to get container status \"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac\": rpc error: code = NotFound desc = could not find container \"5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac\": container with ID starting with 5523e2eb831a61dcc646abfa3ecd58299a9a0e4bc254e987d0b4b8b061cb60ac not found: ID does not exist"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.355135 4703 scope.go:117] "RemoveContainer" containerID="c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa"
Dec 09 12:26:25 crc kubenswrapper[4703]: E1209 12:26:25.355451 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa\": container with ID starting with c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa not found: ID does not exist" containerID="c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.355476 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa"} err="failed to get container status \"c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa\": rpc error: code = NotFound desc = could not find container \"c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa\": container with ID starting with c6edbd978869406dc3962547a1050bf06c13f5f38bbcf5671dba1ceff04a4eaa not found: ID does not exist"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.556072 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 09 12:26:25 crc kubenswrapper[4703]: I1209 12:26:25.556135 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 09 12:26:26 crc kubenswrapper[4703]: I1209 12:26:26.948555 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" event={"ID":"185ccfc8-6550-4ac1-89c2-b366fe27bbf2","Type":"ContainerStarted","Data":"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61"}
Dec 09 12:26:26 crc kubenswrapper[4703]: I1209 12:26:26.950078 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-5594z"
Dec 09 12:26:26 crc kubenswrapper[4703]: I1209 12:26:26.951636 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" event={"ID":"8a105fed-4e9c-4db9-972b-0d9f087f8ebb","Type":"ContainerStarted","Data":"432ab5638ce787273d14ac821c05c332a7d9284e5741966ca1ed6d96b0957ad4"}
Dec 09 12:26:26 crc kubenswrapper[4703]: I1209 12:26:26.952349 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl"
Dec 09 12:26:26 crc kubenswrapper[4703]: I1209 12:26:26.999287 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" podStartSLOduration=5.999261961 podStartE2EDuration="5.999261961s" podCreationTimestamp="2025-12-09 12:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:26.970583866 +0000 UTC m=+1286.219347385" watchObservedRunningTime="2025-12-09 12:26:26.999261961 +0000 UTC m=+1286.248025490"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.000480 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" podStartSLOduration=5.000473032 podStartE2EDuration="5.000473032s" podCreationTimestamp="2025-12-09 12:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:26.988100397 +0000 UTC m=+1286.236863916" watchObservedRunningTime="2025-12-09 12:26:27.000473032 +0000 UTC m=+1286.249236571"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.085344 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" path="/var/lib/kubelet/pods/a733df4a-37e3-4c6a-825e-cb27ba720b71/volumes"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.468792 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"]
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.511784 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"]
Dec 09 12:26:27 crc kubenswrapper[4703]: E1209 12:26:27.512150 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="init"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512165 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="init"
Dec 09 12:26:27 crc kubenswrapper[4703]: E1209 12:26:27.512182 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512203 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: E1209 12:26:27.512215 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="init"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512221 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="init"
Dec 09 12:26:27 crc kubenswrapper[4703]: E1209 12:26:27.512230 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512235 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512469 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da8660a-e38a-41f1-8c85-e262a4ad191a" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.512487 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a733df4a-37e3-4c6a-825e-cb27ba720b71" containerName="dnsmasq-dns"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.520468 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.574543 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"]
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.626742 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.663742 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.663823 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.663906 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.663948 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvhn\" (UniqueName: \"kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.663977 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.765817 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.765928 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvhn\" (UniqueName: \"kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.765962 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.766017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.766058 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.767039 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.767731 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.768129 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.768483 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.806024 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvhn\" (UniqueName: \"kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn\") pod \"dnsmasq-dns-698758b865-78wn9\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:27 crc kubenswrapper[4703]: I1209 12:26:27.860260 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-78wn9"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.626709 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.651308 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.655006 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.655051 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.655221 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.655352 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5j2z5"
Dec 09 12:26:28 crc kubenswrapper[4703]: W1209 12:26:28.708172 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb943c46_2f32_43db_8192_d20d6e8059ea.slice/crio-d43b82055ae174b2b2abbc0292d577465db4e2c5c5d1abcb41752c2f152105e9 WatchSource:0}: Error finding container d43b82055ae174b2b2abbc0292d577465db4e2c5c5d1abcb41752c2f152105e9: Status 404 returned error can't find the container with id d43b82055ae174b2b2abbc0292d577465db4e2c5c5d1abcb41752c2f152105e9
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.728783 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.744268 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"]
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.811579 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-lock\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.811635 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.811659 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.811742 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-597n2\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-kube-api-access-597n2\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.811769 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-cache\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.846476 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cscwb"]
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.852893 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.857120 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.857345 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.857456 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.869083 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cscwb"]
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.914284 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-lock\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.914464 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.914497 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.914636 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-597n2\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-kube-api-access-597n2\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.914675 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-cache\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.915057 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-lock\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.915400 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-cache\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: E1209 12:26:28.917001 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:26:28 crc kubenswrapper[4703]: E1209 12:26:28.917027 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:26:28 crc kubenswrapper[4703]: E1209 12:26:28.917076 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:26:29.417059085 +0000 UTC m=+1288.665822604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.922778 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.927650 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.927693 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/09348ae9f12b381b36e2cf30a46a90d6d0e8f58c6e8c5d401512c5460e777d59/globalmount\"" pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.939300 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-597n2\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-kube-api-access-597n2\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:28 crc kubenswrapper[4703]: I1209 12:26:28.985227 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d40610ca-b658-46f0-97c5-0427a3bcf853\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018111 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018207 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018242 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9t4m\" (UniqueName: \"kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018263 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018327 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018453 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.018517 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.066016 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b","Type":"ContainerStarted","Data":"80e10048d73f171041abee16fa6dac3f63e32213ca67452290fbf9d8cc76ac61"}
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.068329 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.085081 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerStarted","Data":"c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6"}
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.085140 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerStarted","Data":"d43b82055ae174b2b2abbc0292d577465db4e2c5c5d1abcb41752c2f152105e9"}
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.088625 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fa6ba9a6-c0d8-4260-867d-ad91464cc39b","Type":"ContainerStarted","Data":"52678f4d744b011a61cfa2fc49d42d2001590defbde3f8f4f3474ba036292a67"}
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.088688 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fa6ba9a6-c0d8-4260-867d-ad91464cc39b","Type":"ContainerStarted","Data":"24dffebf92dc0a255b0556aeacbd12916ee52769b966f307baf5e4b62a102138"}
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.095159 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="dnsmasq-dns" containerID="cri-o://d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61" gracePeriod=10
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.126539 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.126973 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.127180 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.127328 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.127487 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9t4m\" (UniqueName: \"kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.127583 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.127825 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.132545 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.160279 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.165518 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.165867 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.174424 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.197833 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.203003 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9t4m\" (UniqueName: \"kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m\") pod \"swift-ring-rebalance-cscwb\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") " pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.226085 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.240995 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.772718197 podStartE2EDuration="7.240966911s" podCreationTimestamp="2025-12-09 12:26:22 +0000 UTC" firstStartedPulling="2025-12-09 12:26:24.100146958 +0000 UTC m=+1283.348910477" lastFinishedPulling="2025-12-09 12:26:27.568395672 +0000 UTC m=+1286.817159191" observedRunningTime="2025-12-09 12:26:29.198276278 +0000 UTC m=+1288.447039797" watchObservedRunningTime="2025-12-09 12:26:29.240966911 +0000 UTC m=+1288.489730430"
Dec 09 12:26:29 crc kubenswrapper[4703]: E1209 12:26:29.330856 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb943c46_2f32_43db_8192_d20d6e8059ea.slice/crio-c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb943c46_2f32_43db_8192_d20d6e8059ea.slice/crio-conmon-c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.469295 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4703]: E1209 12:26:29.469635 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:26:29 crc kubenswrapper[4703]: E1209 12:26:29.470060 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:26:29 crc kubenswrapper[4703]: E1209 12:26:29.470134 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:26:30.470113184 +0000 UTC m=+1289.718876703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found
Dec 09 12:26:29 crc kubenswrapper[4703]: I1209 12:26:29.861140 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-5594z"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.000953 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc\") pod \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") "
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.001127 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config\") pod \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") "
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.002185 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pk6c\" (UniqueName: \"kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c\") pod \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") "
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.002474 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb\") pod \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\" (UID: \"185ccfc8-6550-4ac1-89c2-b366fe27bbf2\") "
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.016558 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cscwb"]
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.086058 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.086116 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.106403 4703 generic.go:334] "Generic (PLEG): container finished" podID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerID="c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6" exitCode=0
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.106493 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerDied","Data":"c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6"}
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.106527 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerStarted","Data":"914e48a0dde8274b9f23ac340400c01aa3da20933e4be792329ffbd3f744f87d"}
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.108617 4703 generic.go:334] "Generic (PLEG): container finished" podID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerID="d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61" exitCode=0
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.108636 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-5594z"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.109584 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" event={"ID":"185ccfc8-6550-4ac1-89c2-b366fe27bbf2","Type":"ContainerDied","Data":"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61"}
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.109624 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.109640 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-5594z" event={"ID":"185ccfc8-6550-4ac1-89c2-b366fe27bbf2","Type":"ContainerDied","Data":"64962d000ada7c80a6a07b21fc904acda89410b82c2702e86a355b3e74673bc4"}
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.109663 4703 scope.go:117] "RemoveContainer" containerID="d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.131737 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c" (OuterVolumeSpecName: "kube-api-access-5pk6c") pod "185ccfc8-6550-4ac1-89c2-b366fe27bbf2" (UID: "185ccfc8-6550-4ac1-89c2-b366fe27bbf2"). InnerVolumeSpecName "kube-api-access-5pk6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.199422 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config" (OuterVolumeSpecName: "config") pod "185ccfc8-6550-4ac1-89c2-b366fe27bbf2" (UID: "185ccfc8-6550-4ac1-89c2-b366fe27bbf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.210890 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pk6c\" (UniqueName: \"kubernetes.io/projected/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-kube-api-access-5pk6c\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.211263 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.225444 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "185ccfc8-6550-4ac1-89c2-b366fe27bbf2" (UID: "185ccfc8-6550-4ac1-89c2-b366fe27bbf2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.226926 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "185ccfc8-6550-4ac1-89c2-b366fe27bbf2" (UID: "185ccfc8-6550-4ac1-89c2-b366fe27bbf2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.245647 4703 scope.go:117] "RemoveContainer" containerID="daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.260234 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-664b687b54-cjj5n"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.313759 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.313801 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/185ccfc8-6550-4ac1-89c2-b366fe27bbf2-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.459078 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"]
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.472618 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-5594z"]
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.503575 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-5467947bf7-tfqqj"
Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.516947 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0"
Dec 09 12:26:30 crc kubenswrapper[4703]: E1209 12:26:30.517280 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:26:30 crc kubenswrapper[4703]: E1209 12:26:30.517300 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:26:30 crc kubenswrapper[4703]: E1209 12:26:30.517355 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:26:32.517334994 +0000 UTC m=+1291.766098513 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found Dec 09 12:26:30 crc kubenswrapper[4703]: I1209 12:26:30.611702 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g" Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.084776 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" path="/var/lib/kubelet/pods/185ccfc8-6550-4ac1-89c2-b366fe27bbf2/volumes" Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.135985 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cscwb" event={"ID":"2adbe90b-b3dc-480b-9ab1-f6084b5dee94","Type":"ContainerStarted","Data":"19cfaa20be13e6c3f204858971397f33f5898083c5b82e4e66a25ab03e4e9148"} Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.136263 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-78wn9" Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.176453 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-78wn9" podStartSLOduration=4.17643079 podStartE2EDuration="4.17643079s" podCreationTimestamp="2025-12-09 12:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:31.171310375 +0000 UTC m=+1290.420073894" watchObservedRunningTime="2025-12-09 12:26:31.17643079 +0000 UTC m=+1290.425194309" Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.667981 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Dec 09 12:26:31 crc kubenswrapper[4703]: I1209 12:26:31.789578 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="be3c1046-2c78-46ab-a62f-f4270561ca1c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.148462 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b","Type":"ContainerStarted","Data":"ad1ed687d4b79ca1999ea61d025532ac4a4836339421de4fa41a020ec10d45d1"} Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.148878 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.152302 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.179450 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.382425369 podStartE2EDuration="1m14.179430037s" podCreationTimestamp="2025-12-09 12:25:18 +0000 UTC" firstStartedPulling="2025-12-09 12:25:19.771690651 +0000 UTC m=+1219.020454170" lastFinishedPulling="2025-12-09 12:26:27.568695319 +0000 UTC m=+1286.817458838" observedRunningTime="2025-12-09 12:26:32.172355071 +0000 UTC m=+1291.421118590" watchObservedRunningTime="2025-12-09 12:26:32.179430037 +0000 
UTC m=+1291.428193556" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.572090 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0" Dec 09 12:26:32 crc kubenswrapper[4703]: E1209 12:26:32.572410 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:26:32 crc kubenswrapper[4703]: E1209 12:26:32.572541 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:26:32 crc kubenswrapper[4703]: E1209 12:26:32.572625 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:26:36.572603193 +0000 UTC m=+1295.821366712 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.795733 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.851928 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:32 crc kubenswrapper[4703]: I1209 12:26:32.924339 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:33 crc kubenswrapper[4703]: I1209 12:26:33.698880 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.185137 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-44txf"] Dec 09 12:26:35 crc kubenswrapper[4703]: E1209 12:26:35.185851 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="dnsmasq-dns" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.185865 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="dnsmasq-dns" Dec 09 12:26:35 crc kubenswrapper[4703]: E1209 12:26:35.185892 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="init" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.185898 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="init" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.186084 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="185ccfc8-6550-4ac1-89c2-b366fe27bbf2" containerName="dnsmasq-dns" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.187059 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.207068 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-63d9-account-create-update-s4v4m"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.208902 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.210901 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.221547 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-44txf"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.230064 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-63d9-account-create-update-s4v4m"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.243676 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.243817 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvm2v\" (UniqueName: \"kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.243876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.243967 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbzl\" (UniqueName: \"kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.346047 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.346175 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvm2v\" (UniqueName: \"kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.346229 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.346257 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbzl\" (UniqueName: \"kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.347048 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.347106 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.381356 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvm2v\" (UniqueName: \"kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v\") pod \"keystone-db-create-44txf\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.381449 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbzl\" (UniqueName: \"kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl\") pod \"keystone-63d9-account-create-update-s4v4m\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.514743 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nfgrz"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.516655 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.555723 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nfgrz"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.558964 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspvk\" (UniqueName: \"kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.559725 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-44txf" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.560270 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.572406 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.779771 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspvk\" (UniqueName: \"kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.779854 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.780693 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.790378 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b17c-account-create-update-vgfln"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.791924 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.804559 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.826516 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b17c-account-create-update-vgfln"] Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.851965 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspvk\" (UniqueName: \"kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk\") pod \"placement-db-create-nfgrz\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.910146 4703 scope.go:117] "RemoveContainer" containerID="d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61" Dec 09 12:26:35 crc kubenswrapper[4703]: E1209 12:26:35.910585 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61\": container with ID starting with d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61 not found: ID does not exist" containerID="d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.910638 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61"} err="failed to get container status \"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61\": rpc error: code = NotFound desc = could not find container \"d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61\": container with ID starting with d20c9f9f914369b1f124ba71fdd0dcc538ef28caf7a2a29cdf8e381db9926a61 not found: ID does not exist" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.910666 4703 scope.go:117] "RemoveContainer" containerID="daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3" Dec 09 12:26:35 crc kubenswrapper[4703]: E1209 12:26:35.911490 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3\": container with ID starting with daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3 not found: ID does not exist" containerID="daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.911531 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3"} err="failed to get container status \"daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3\": rpc error: code = NotFound desc = could not find container \"daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3\": container with ID starting with daa0f086da1aee6d24a61f07c1160311e958c469a38622e0d6a0eb7357f9cbe3 not found: ID does not exist" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.985329 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvwql\" (UniqueName: 
\"kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:35 crc kubenswrapper[4703]: I1209 12:26:35.985811 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.092525 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvwql\" (UniqueName: \"kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.092662 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.094952 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.110930 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvwql\" (UniqueName: \"kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql\") pod \"placement-b17c-account-create-update-vgfln\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.124656 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.151594 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:36 crc kubenswrapper[4703]: I1209 12:26:36.610849 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0" Dec 09 12:26:36 crc kubenswrapper[4703]: E1209 12:26:36.611437 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:26:36 crc kubenswrapper[4703]: E1209 12:26:36.611456 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:26:36 crc kubenswrapper[4703]: E1209 12:26:36.611509 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:26:44.611489428 +0000 UTC m=+1303.860252947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.201969 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-44txf"] Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.235876 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-44txf" event={"ID":"02c41eb5-761c-4cb6-b9dc-57f238f48b87","Type":"ContainerStarted","Data":"fdfe17e1fc4d115190a40d7eb4e49d07ae7cc9bb66e87f52df81941d2d6cedba"} Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.459042 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nfgrz"] Dec 09 12:26:37 crc kubenswrapper[4703]: W1209 12:26:37.465537 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04561f1_805c_4e06_a01c_6548e9a234e5.slice/crio-b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70 WatchSource:0}: Error finding container b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70: Status 404 returned error can't find the container with id b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70 Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.547354 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b17c-account-create-update-vgfln"] Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.564638 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-63d9-account-create-update-s4v4m"] Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.862467 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-78wn9" Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.922809 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"] Dec 09 12:26:37 crc kubenswrapper[4703]: I1209 12:26:37.923058 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" 
containerName="dnsmasq-dns" containerID="cri-o://432ab5638ce787273d14ac821c05c332a7d9284e5741966ca1ed6d96b0957ad4" gracePeriod=10 Dec 09 12:26:38 crc kubenswrapper[4703]: I1209 12:26:38.245731 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b17c-account-create-update-vgfln" event={"ID":"1d051446-c217-438b-9b9f-7477ae28c6f4","Type":"ContainerStarted","Data":"fa94be7e152f3ccba996ef28f86ec16f726f07850e0ef246e7ba0e7037c7adbc"} Dec 09 12:26:38 crc kubenswrapper[4703]: I1209 12:26:38.247125 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nfgrz" event={"ID":"e04561f1-805c-4e06-a01c-6548e9a234e5","Type":"ContainerStarted","Data":"b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70"} Dec 09 12:26:38 crc kubenswrapper[4703]: I1209 12:26:38.248137 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-63d9-account-create-update-s4v4m" event={"ID":"58dc912e-9acc-425c-b79f-c76604da270b","Type":"ContainerStarted","Data":"3bcb450137a579f206fbcf52eb54445f383205829b0cc427efad6edfe34ae640"} Dec 09 12:26:38 crc kubenswrapper[4703]: I1209 12:26:38.524933 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.273386 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerStarted","Data":"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.277209 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-44txf" event={"ID":"02c41eb5-761c-4cb6-b9dc-57f238f48b87","Type":"ContainerStarted","Data":"4b9d06174afbe1726e1b77cf2da342f854c892cc37ab83cf2b5a5cf40de1552d"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.281016 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b17c-account-create-update-vgfln" event={"ID":"1d051446-c217-438b-9b9f-7477ae28c6f4","Type":"ContainerStarted","Data":"9fefe56742d92f05796b63895e03254a7f8fa3b253391bdfcd472de5d1b83d26"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.307568 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" event={"ID":"8a105fed-4e9c-4db9-972b-0d9f087f8ebb","Type":"ContainerDied","Data":"432ab5638ce787273d14ac821c05c332a7d9284e5741966ca1ed6d96b0957ad4"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.307603 4703 generic.go:334] "Generic (PLEG): container finished" podID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerID="432ab5638ce787273d14ac821c05c332a7d9284e5741966ca1ed6d96b0957ad4" exitCode=0 Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.309041 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-44txf" podStartSLOduration=4.30902024 podStartE2EDuration="4.30902024s" podCreationTimestamp="2025-12-09 12:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:39.292958869 +0000 UTC m=+1298.541722398" watchObservedRunningTime="2025-12-09 12:26:39.30902024 +0000 UTC m=+1298.557783759" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.318385 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nfgrz" 
event={"ID":"e04561f1-805c-4e06-a01c-6548e9a234e5","Type":"ContainerStarted","Data":"1d21cd5ee97425d5b078d5f90d4c77843d9520c6c2e53ea074fd2c9ef637612b"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.344206 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b17c-account-create-update-vgfln" podStartSLOduration=4.344164464 podStartE2EDuration="4.344164464s" podCreationTimestamp="2025-12-09 12:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:39.318411417 +0000 UTC m=+1298.567174936" watchObservedRunningTime="2025-12-09 12:26:39.344164464 +0000 UTC m=+1298.592927993" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.347694 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-63d9-account-create-update-s4v4m" event={"ID":"58dc912e-9acc-425c-b79f-c76604da270b","Type":"ContainerStarted","Data":"64325e0686200aa9d7e9a6bc134228605d4b1832fe255bf900296ba1fdff6630"} Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.359218 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nfgrz" podStartSLOduration=4.35919981 podStartE2EDuration="4.35919981s" podCreationTimestamp="2025-12-09 12:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:39.345269433 +0000 UTC m=+1298.594032952" watchObservedRunningTime="2025-12-09 12:26:39.35919981 +0000 UTC m=+1298.607963329" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.372671 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-63d9-account-create-update-s4v4m" podStartSLOduration=4.372649754 podStartE2EDuration="4.372649754s" podCreationTimestamp="2025-12-09 12:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:39.370756743 +0000 UTC m=+1298.619520262" watchObservedRunningTime="2025-12-09 12:26:39.372649754 +0000 UTC m=+1298.621413273" Dec 09 12:26:39 crc kubenswrapper[4703]: E1209 12:26:39.732855 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c41eb5_761c_4cb6_b9dc_57f238f48b87.slice/crio-conmon-4b9d06174afbe1726e1b77cf2da342f854c892cc37ab83cf2b5a5cf40de1552d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode04561f1_805c_4e06_a01c_6548e9a234e5.slice/crio-conmon-1d21cd5ee97425d5b078d5f90d4c77843d9520c6c2e53ea074fd2c9ef637612b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58dc912e_9acc_425c_b79f_c76604da270b.slice/crio-64325e0686200aa9d7e9a6bc134228605d4b1832fe255bf900296ba1fdff6630.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.850151 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.944310 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config\") pod \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.944710 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb\") pod \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.944846 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmth2\" (UniqueName: \"kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2\") pod \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.944957 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb\") pod \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.945083 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc\") pod \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\" (UID: \"8a105fed-4e9c-4db9-972b-0d9f087f8ebb\") " Dec 09 12:26:39 crc kubenswrapper[4703]: I1209 12:26:39.981429 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2" (OuterVolumeSpecName: "kube-api-access-pmth2") pod "8a105fed-4e9c-4db9-972b-0d9f087f8ebb" (UID: "8a105fed-4e9c-4db9-972b-0d9f087f8ebb"). InnerVolumeSpecName "kube-api-access-pmth2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.015652 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config" (OuterVolumeSpecName: "config") pod "8a105fed-4e9c-4db9-972b-0d9f087f8ebb" (UID: "8a105fed-4e9c-4db9-972b-0d9f087f8ebb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.025168 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a105fed-4e9c-4db9-972b-0d9f087f8ebb" (UID: "8a105fed-4e9c-4db9-972b-0d9f087f8ebb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.036901 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a105fed-4e9c-4db9-972b-0d9f087f8ebb" (UID: "8a105fed-4e9c-4db9-972b-0d9f087f8ebb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.048133 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.049225 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.050017 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmth2\" (UniqueName: \"kubernetes.io/projected/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-kube-api-access-pmth2\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.050123 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.069805 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a105fed-4e9c-4db9-972b-0d9f087f8ebb" (UID: "8a105fed-4e9c-4db9-972b-0d9f087f8ebb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.151913 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a105fed-4e9c-4db9-972b-0d9f087f8ebb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.370330 4703 generic.go:334] "Generic (PLEG): container finished" podID="58dc912e-9acc-425c-b79f-c76604da270b" containerID="64325e0686200aa9d7e9a6bc134228605d4b1832fe255bf900296ba1fdff6630" exitCode=0 Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.370419 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-63d9-account-create-update-s4v4m" event={"ID":"58dc912e-9acc-425c-b79f-c76604da270b","Type":"ContainerDied","Data":"64325e0686200aa9d7e9a6bc134228605d4b1832fe255bf900296ba1fdff6630"} Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.383781 4703 generic.go:334] "Generic (PLEG): container finished" podID="02c41eb5-761c-4cb6-b9dc-57f238f48b87" containerID="4b9d06174afbe1726e1b77cf2da342f854c892cc37ab83cf2b5a5cf40de1552d" exitCode=0 Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.383863 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-44txf" event={"ID":"02c41eb5-761c-4cb6-b9dc-57f238f48b87","Type":"ContainerDied","Data":"4b9d06174afbe1726e1b77cf2da342f854c892cc37ab83cf2b5a5cf40de1552d"} Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.387102 4703 generic.go:334] "Generic (PLEG): container finished" podID="1d051446-c217-438b-9b9f-7477ae28c6f4" containerID="9fefe56742d92f05796b63895e03254a7f8fa3b253391bdfcd472de5d1b83d26" exitCode=0 Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.387220 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b17c-account-create-update-vgfln" event={"ID":"1d051446-c217-438b-9b9f-7477ae28c6f4","Type":"ContainerDied","Data":"9fefe56742d92f05796b63895e03254a7f8fa3b253391bdfcd472de5d1b83d26"} Dec 09 12:26:40 crc 
kubenswrapper[4703]: I1209 12:26:40.391991 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" event={"ID":"8a105fed-4e9c-4db9-972b-0d9f087f8ebb","Type":"ContainerDied","Data":"00d4d919efd27bb785af6bd5f90561d3ef7d41b53525d7f217aae366d40790ec"} Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.392054 4703 scope.go:117] "RemoveContainer" containerID="432ab5638ce787273d14ac821c05c332a7d9284e5741966ca1ed6d96b0957ad4" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.392072 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zjvxl" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.420890 4703 generic.go:334] "Generic (PLEG): container finished" podID="e04561f1-805c-4e06-a01c-6548e9a234e5" containerID="1d21cd5ee97425d5b078d5f90d4c77843d9520c6c2e53ea074fd2c9ef637612b" exitCode=0 Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.420940 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nfgrz" event={"ID":"e04561f1-805c-4e06-a01c-6548e9a234e5","Type":"ContainerDied","Data":"1d21cd5ee97425d5b078d5f90d4c77843d9520c6c2e53ea074fd2c9ef637612b"} Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.445590 4703 scope.go:117] "RemoveContainer" containerID="8eb73e13671127b82b551c077e9e3c91cce7c658346ee217b6733639c1aa2bd2" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.491428 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"] Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.504104 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zjvxl"] Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.678480 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kwzbm"] Dec 09 12:26:40 crc kubenswrapper[4703]: E1209 12:26:40.679154 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerName="dnsmasq-dns" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.679178 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerName="dnsmasq-dns" Dec 09 12:26:40 crc kubenswrapper[4703]: E1209 12:26:40.679221 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerName="init" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.679230 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerName="init" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.679501 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" containerName="dnsmasq-dns" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.680455 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.697378 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kwzbm"] Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.773851 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.775074 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtwkc\" (UniqueName: \"kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.786653 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0b9b-account-create-update-cv86j"] Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.794691 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.797587 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0b9b-account-create-update-cv86j"] Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.799931 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.877301 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtwkc\" (UniqueName: \"kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.877414 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6j5\" (UniqueName: \"kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.877458 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.877492 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.878283 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.904520 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtwkc\" (UniqueName: \"kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc\") pod \"glance-db-create-kwzbm\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.979635 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6j5\" (UniqueName: \"kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.979987 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:40 crc kubenswrapper[4703]: I1209 12:26:40.980804 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:41 crc kubenswrapper[4703]: I1209 12:26:41.018910 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6j5\" (UniqueName: \"kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5\") pod \"glance-0b9b-account-create-update-cv86j\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:41 crc kubenswrapper[4703]: I1209 12:26:41.070392 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:41 crc kubenswrapper[4703]: I1209 12:26:41.085074 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a105fed-4e9c-4db9-972b-0d9f087f8ebb" path="/var/lib/kubelet/pods/8a105fed-4e9c-4db9-972b-0d9f087f8ebb/volumes" Dec 09 12:26:41 crc kubenswrapper[4703]: I1209 12:26:41.126310 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:41 crc kubenswrapper[4703]: I1209 12:26:41.792577 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="be3c1046-2c78-46ab-a62f-f4270561ca1c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:26:42 crc kubenswrapper[4703]: I1209 12:26:42.576731 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:26:43 crc kubenswrapper[4703]: I1209 12:26:43.459535 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerStarted","Data":"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.143621 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.172329 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.178486 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dspvk\" (UniqueName: \"kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk\") pod \"e04561f1-805c-4e06-a01c-6548e9a234e5\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.178518 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts\") pod \"e04561f1-805c-4e06-a01c-6548e9a234e5\" (UID: \"e04561f1-805c-4e06-a01c-6548e9a234e5\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.178568 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rbzl\" (UniqueName: \"kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl\") pod \"58dc912e-9acc-425c-b79f-c76604da270b\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.178595 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts\") pod \"58dc912e-9acc-425c-b79f-c76604da270b\" (UID: \"58dc912e-9acc-425c-b79f-c76604da270b\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.180109 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e04561f1-805c-4e06-a01c-6548e9a234e5" (UID: "e04561f1-805c-4e06-a01c-6548e9a234e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.185985 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58dc912e-9acc-425c-b79f-c76604da270b" (UID: "58dc912e-9acc-425c-b79f-c76604da270b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.189996 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl" (OuterVolumeSpecName: "kube-api-access-8rbzl") pod "58dc912e-9acc-425c-b79f-c76604da270b" (UID: "58dc912e-9acc-425c-b79f-c76604da270b"). InnerVolumeSpecName "kube-api-access-8rbzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.193447 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk" (OuterVolumeSpecName: "kube-api-access-dspvk") pod "e04561f1-805c-4e06-a01c-6548e9a234e5" (UID: "e04561f1-805c-4e06-a01c-6548e9a234e5"). InnerVolumeSpecName "kube-api-access-dspvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.257928 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.270344 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-44txf" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279158 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvm2v\" (UniqueName: \"kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v\") pod \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279294 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvwql\" (UniqueName: \"kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql\") pod \"1d051446-c217-438b-9b9f-7477ae28c6f4\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279331 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts\") pod \"1d051446-c217-438b-9b9f-7477ae28c6f4\" (UID: \"1d051446-c217-438b-9b9f-7477ae28c6f4\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279351 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts\") pod \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\" (UID: \"02c41eb5-761c-4cb6-b9dc-57f238f48b87\") " Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279651 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dspvk\" (UniqueName: \"kubernetes.io/projected/e04561f1-805c-4e06-a01c-6548e9a234e5-kube-api-access-dspvk\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279664 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04561f1-805c-4e06-a01c-6548e9a234e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279675 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rbzl\" (UniqueName: 
\"kubernetes.io/projected/58dc912e-9acc-425c-b79f-c76604da270b-kube-api-access-8rbzl\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.279686 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58dc912e-9acc-425c-b79f-c76604da270b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.280156 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02c41eb5-761c-4cb6-b9dc-57f238f48b87" (UID: "02c41eb5-761c-4cb6-b9dc-57f238f48b87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.281575 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d051446-c217-438b-9b9f-7477ae28c6f4" (UID: "1d051446-c217-438b-9b9f-7477ae28c6f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.285053 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql" (OuterVolumeSpecName: "kube-api-access-qvwql") pod "1d051446-c217-438b-9b9f-7477ae28c6f4" (UID: "1d051446-c217-438b-9b9f-7477ae28c6f4"). InnerVolumeSpecName "kube-api-access-qvwql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.286070 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v" (OuterVolumeSpecName: "kube-api-access-cvm2v") pod "02c41eb5-761c-4cb6-b9dc-57f238f48b87" (UID: "02c41eb5-761c-4cb6-b9dc-57f238f48b87"). InnerVolumeSpecName "kube-api-access-cvm2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.381600 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvm2v\" (UniqueName: \"kubernetes.io/projected/02c41eb5-761c-4cb6-b9dc-57f238f48b87-kube-api-access-cvm2v\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.381649 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvwql\" (UniqueName: \"kubernetes.io/projected/1d051446-c217-438b-9b9f-7477ae28c6f4-kube-api-access-qvwql\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.381660 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d051446-c217-438b-9b9f-7477ae28c6f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.381671 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02c41eb5-761c-4cb6-b9dc-57f238f48b87-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.451867 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0b9b-account-create-update-cv86j"] Dec 09 12:26:44 crc kubenswrapper[4703]: W1209 12:26:44.457587 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf188683b_b8cf_4aab_81cb_ad7318e3e07f.slice/crio-a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80 WatchSource:0}: Error finding container a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80: Status 404 returned error can't find the container with id a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80 Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.481618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nfgrz" event={"ID":"e04561f1-805c-4e06-a01c-6548e9a234e5","Type":"ContainerDied","Data":"b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.481682 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62df298b660b1eccf325e420f669e1b61661543d763fb37d1c6d822afc30e70" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.481688 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nfgrz" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.483780 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0b9b-account-create-update-cv86j" event={"ID":"f188683b-b8cf-4aab-81cb-ad7318e3e07f","Type":"ContainerStarted","Data":"a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.486210 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-63d9-account-create-update-s4v4m" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.486210 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-63d9-account-create-update-s4v4m" event={"ID":"58dc912e-9acc-425c-b79f-c76604da270b","Type":"ContainerDied","Data":"3bcb450137a579f206fbcf52eb54445f383205829b0cc427efad6edfe34ae640"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.486277 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bcb450137a579f206fbcf52eb54445f383205829b0cc427efad6edfe34ae640" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.491154 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-44txf" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.491210 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-44txf" event={"ID":"02c41eb5-761c-4cb6-b9dc-57f238f48b87","Type":"ContainerDied","Data":"fdfe17e1fc4d115190a40d7eb4e49d07ae7cc9bb66e87f52df81941d2d6cedba"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.491262 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdfe17e1fc4d115190a40d7eb4e49d07ae7cc9bb66e87f52df81941d2d6cedba" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.499091 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b17c-account-create-update-vgfln" event={"ID":"1d051446-c217-438b-9b9f-7477ae28c6f4","Type":"ContainerDied","Data":"fa94be7e152f3ccba996ef28f86ec16f726f07850e0ef246e7ba0e7037c7adbc"} Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.499138 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa94be7e152f3ccba996ef28f86ec16f726f07850e0ef246e7ba0e7037c7adbc" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.499422 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b17c-account-create-update-vgfln" Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.516931 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kwzbm"] Dec 09 12:26:44 crc kubenswrapper[4703]: I1209 12:26:44.686612 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0" Dec 09 12:26:44 crc kubenswrapper[4703]: E1209 12:26:44.686852 4703 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:26:44 crc kubenswrapper[4703]: E1209 12:26:44.686867 4703 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:26:44 crc kubenswrapper[4703]: E1209 12:26:44.686912 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift podName:7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44 nodeName:}" failed. No retries permitted until 2025-12-09 12:27:00.686898363 +0000 UTC m=+1319.935661882 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift") pod "swift-storage-0" (UID: "7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44") : configmap "swift-ring-files" not found Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.515012 4703 generic.go:334] "Generic (PLEG): container finished" podID="7aa6f4bf-5553-4149-ae85-b17092bef181" containerID="e80be929b3869438c19736e55e885dbea0518c3720d8019b45e243a6ddb3507f" exitCode=0 Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.515082 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwzbm" event={"ID":"7aa6f4bf-5553-4149-ae85-b17092bef181","Type":"ContainerDied","Data":"e80be929b3869438c19736e55e885dbea0518c3720d8019b45e243a6ddb3507f"} Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.515423 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwzbm" event={"ID":"7aa6f4bf-5553-4149-ae85-b17092bef181","Type":"ContainerStarted","Data":"f17fd03593247336575b81b1a25c54766a0d531451f57059f3b70514d7e5f38c"} Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.518826 4703 generic.go:334] "Generic (PLEG): container finished" podID="f188683b-b8cf-4aab-81cb-ad7318e3e07f" containerID="20cc77896d41d7d9a2ffe3a600b895b715d42c2f993eb8da23b8acec00fc0703" exitCode=0 Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.518897 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0b9b-account-create-update-cv86j" event={"ID":"f188683b-b8cf-4aab-81cb-ad7318e3e07f","Type":"ContainerDied","Data":"20cc77896d41d7d9a2ffe3a600b895b715d42c2f993eb8da23b8acec00fc0703"} Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.521205 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cscwb" event={"ID":"2adbe90b-b3dc-480b-9ab1-f6084b5dee94","Type":"ContainerStarted","Data":"fc8e6f088e45a1b094a10a49b724eee8587616d028b6539a3aa4b2f1221bc7d6"} Dec 09 12:26:45 crc kubenswrapper[4703]: I1209 12:26:45.581372 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cscwb" podStartSLOduration=3.809928884 podStartE2EDuration="17.581349446s" podCreationTimestamp="2025-12-09 12:26:28 +0000 UTC" firstStartedPulling="2025-12-09 12:26:30.149788862 +0000 UTC m=+1289.398552391" lastFinishedPulling="2025-12-09 12:26:43.921209434 +0000 UTC m=+1303.169972953" observedRunningTime="2025-12-09 12:26:45.577368412 +0000 UTC m=+1304.826131931" watchObservedRunningTime="2025-12-09 12:26:45.581349446 +0000 UTC m=+1304.830112965" Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.543987 4703 generic.go:334] "Generic (PLEG): container finished" podID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerID="224e6199fe19659e824e440bbf703885352f49c9d09a917c672c84d49e959a75" exitCode=0 Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.544331 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerDied","Data":"224e6199fe19659e824e440bbf703885352f49c9d09a917c672c84d49e959a75"} Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.580510 4703 generic.go:334] "Generic (PLEG): container finished" podID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerID="7004fbf8033811bf800c6568cb5bc504bf11e5f90793c8b4a4ad5e5198027033" exitCode=0 Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.580772 4703 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerDied","Data":"7004fbf8033811bf800c6568cb5bc504bf11e5f90793c8b4a4ad5e5198027033"} Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.729612 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-lsj78" podUID="4222b5eb-89d5-41be-ab08-6f3f3f4dab42" containerName="ovn-controller" probeResult="failure" output=< Dec 09 12:26:46 crc kubenswrapper[4703]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 12:26:46 crc kubenswrapper[4703]: > Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.817593 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:26:46 crc kubenswrapper[4703]: I1209 12:26:46.842659 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-b4tvr" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.178502 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lsj78-config-ht5qk"] Dec 09 12:26:47 crc kubenswrapper[4703]: E1209 12:26:47.179448 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d051446-c217-438b-9b9f-7477ae28c6f4" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179473 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d051446-c217-438b-9b9f-7477ae28c6f4" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: E1209 12:26:47.179492 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c41eb5-761c-4cb6-b9dc-57f238f48b87" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179501 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c41eb5-761c-4cb6-b9dc-57f238f48b87" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: E1209 12:26:47.179519 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04561f1-805c-4e06-a01c-6548e9a234e5" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179528 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04561f1-805c-4e06-a01c-6548e9a234e5" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: E1209 12:26:47.179550 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58dc912e-9acc-425c-b79f-c76604da270b" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179560 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dc912e-9acc-425c-b79f-c76604da270b" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179814 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c41eb5-761c-4cb6-b9dc-57f238f48b87" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179836 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04561f1-805c-4e06-a01c-6548e9a234e5" containerName="mariadb-database-create" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179863 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d051446-c217-438b-9b9f-7477ae28c6f4" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.179878 4703 
memory_manager.go:354] "RemoveStaleState removing state" podUID="58dc912e-9acc-425c-b79f-c76604da270b" containerName="mariadb-account-create-update" Dec 09 12:26:47 crc kubenswrapper[4703]: I1209 12:26:47.181242 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.187210 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.198882 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78-config-ht5qk"] Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.216757 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.261707 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts\") pod \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.261970 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6j5\" (UniqueName: \"kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5\") pod \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\" (UID: \"f188683b-b8cf-4aab-81cb-ad7318e3e07f\") " Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262254 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262391 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phn5s\" (UniqueName: \"kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262441 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262558 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262605 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run\") pod 
\"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.262629 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.263632 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f188683b-b8cf-4aab-81cb-ad7318e3e07f" (UID: "f188683b-b8cf-4aab-81cb-ad7318e3e07f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.269212 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5" (OuterVolumeSpecName: "kube-api-access-qx6j5") pod "f188683b-b8cf-4aab-81cb-ad7318e3e07f" (UID: "f188683b-b8cf-4aab-81cb-ad7318e3e07f"). InnerVolumeSpecName "kube-api-access-qx6j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.364694 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.364902 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.364954 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.364987 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.365072 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.365102 4703 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-phn5s\" (UniqueName: \"kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.365202 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6j5\" (UniqueName: \"kubernetes.io/projected/f188683b-b8cf-4aab-81cb-ad7318e3e07f-kube-api-access-qx6j5\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.365216 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f188683b-b8cf-4aab-81cb-ad7318e3e07f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.366686 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.367050 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.367121 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.367173 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.369275 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.393157 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phn5s\" (UniqueName: \"kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s\") pod \"ovn-controller-lsj78-config-ht5qk\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.462151 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.520355 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.568248 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts\") pod \"7aa6f4bf-5553-4149-ae85-b17092bef181\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.568393 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtwkc\" (UniqueName: \"kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc\") pod \"7aa6f4bf-5553-4149-ae85-b17092bef181\" (UID: \"7aa6f4bf-5553-4149-ae85-b17092bef181\") " Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.569692 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7aa6f4bf-5553-4149-ae85-b17092bef181" (UID: "7aa6f4bf-5553-4149-ae85-b17092bef181"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.580445 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc" (OuterVolumeSpecName: "kube-api-access-dtwkc") pod "7aa6f4bf-5553-4149-ae85-b17092bef181" (UID: "7aa6f4bf-5553-4149-ae85-b17092bef181"). InnerVolumeSpecName "kube-api-access-dtwkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.606404 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerStarted","Data":"93c8b05923234bf1df08fc613abeb309450e92de8ba8c1f863d264be47f47d01"} Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.607535 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.610157 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0b9b-account-create-update-cv86j" event={"ID":"f188683b-b8cf-4aab-81cb-ad7318e3e07f","Type":"ContainerDied","Data":"a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80"} Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.610205 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fb6eb09451eded20531c7f2fc4d4e5b9d0f817a322d7fcdcaefc875115cc80" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.610263 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0b9b-account-create-update-cv86j" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.613484 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerStarted","Data":"d802eddc1bf30be7ca0f224c45b6c4fc51bf6ffb664ee95893f357f53dc9a666"} Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.614036 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.615869 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerStarted","Data":"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"} Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.622762 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kwzbm" event={"ID":"7aa6f4bf-5553-4149-ae85-b17092bef181","Type":"ContainerDied","Data":"f17fd03593247336575b81b1a25c54766a0d531451f57059f3b70514d7e5f38c"} Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.622815 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17fd03593247336575b81b1a25c54766a0d531451f57059f3b70514d7e5f38c" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.622782 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kwzbm" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.641800 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=46.651058762 podStartE2EDuration="1m36.641782792s" podCreationTimestamp="2025-12-09 12:25:11 +0000 UTC" firstStartedPulling="2025-12-09 12:25:13.934902177 +0000 UTC m=+1213.183665696" lastFinishedPulling="2025-12-09 12:26:03.925626207 +0000 UTC m=+1263.174389726" observedRunningTime="2025-12-09 12:26:47.640513729 +0000 UTC m=+1306.889277258" watchObservedRunningTime="2025-12-09 12:26:47.641782792 +0000 UTC m=+1306.890546311" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.670578 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7aa6f4bf-5553-4149-ae85-b17092bef181-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.670979 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtwkc\" (UniqueName: \"kubernetes.io/projected/7aa6f4bf-5553-4149-ae85-b17092bef181-kube-api-access-dtwkc\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.708363 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371940.146437 podStartE2EDuration="1m36.708339541s" podCreationTimestamp="2025-12-09 12:25:11 +0000 UTC" firstStartedPulling="2025-12-09 12:25:14.125955307 +0000 UTC m=+1213.374718826" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:47.701085831 +0000 UTC m=+1306.949849370" watchObservedRunningTime="2025-12-09 12:26:47.708339541 +0000 UTC m=+1306.957103060" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:47.755240 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" 
podStartSLOduration=4.405264127 podStartE2EDuration="1m30.755220854s" podCreationTimestamp="2025-12-09 12:25:17 +0000 UTC" firstStartedPulling="2025-12-09 12:25:19.939118305 +0000 UTC m=+1219.187881824" lastFinishedPulling="2025-12-09 12:26:46.289075032 +0000 UTC m=+1305.537838551" observedRunningTime="2025-12-09 12:26:47.750171561 +0000 UTC m=+1306.998935100" watchObservedRunningTime="2025-12-09 12:26:47.755220854 +0000 UTC m=+1307.003984373" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:48.971594 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:48.971966 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:48 crc kubenswrapper[4703]: I1209 12:26:48.974219 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:48 crc kubenswrapper[4703]: W1209 12:26:48.992699 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4885ff5_3e9f_4297_8601_9b058062c8c3.slice/crio-a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce WatchSource:0}: Error finding container a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce: Status 404 returned error can't find the container with id a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce Dec 09 12:26:49 crc kubenswrapper[4703]: I1209 12:26:49.016685 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78-config-ht5qk"] Dec 09 12:26:49 crc kubenswrapper[4703]: I1209 12:26:49.643232 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-ht5qk" event={"ID":"f4885ff5-3e9f-4297-8601-9b058062c8c3","Type":"ContainerStarted","Data":"3b3f2b61c96f9048f951cf177867d2aeeafff9cdcf473e38b29623a95aaa3a3f"} Dec 09 12:26:49 crc kubenswrapper[4703]: I1209 12:26:49.643549 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-ht5qk" event={"ID":"f4885ff5-3e9f-4297-8601-9b058062c8c3","Type":"ContainerStarted","Data":"a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce"} Dec 09 12:26:49 crc kubenswrapper[4703]: I1209 12:26:49.644341 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:49 crc kubenswrapper[4703]: I1209 12:26:49.672006 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lsj78-config-ht5qk" podStartSLOduration=2.671981432 podStartE2EDuration="2.671981432s" podCreationTimestamp="2025-12-09 12:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:49.661728012 +0000 UTC m=+1308.910491551" watchObservedRunningTime="2025-12-09 12:26:49.671981432 +0000 UTC m=+1308.920744951" Dec 09 12:26:50 crc kubenswrapper[4703]: I1209 12:26:50.651139 4703 generic.go:334] "Generic (PLEG): container finished" podID="f4885ff5-3e9f-4297-8601-9b058062c8c3" containerID="3b3f2b61c96f9048f951cf177867d2aeeafff9cdcf473e38b29623a95aaa3a3f" exitCode=0 Dec 09 12:26:50 crc kubenswrapper[4703]: I1209 12:26:50.651301 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-ht5qk" 
event={"ID":"f4885ff5-3e9f-4297-8601-9b058062c8c3","Type":"ContainerDied","Data":"3b3f2b61c96f9048f951cf177867d2aeeafff9cdcf473e38b29623a95aaa3a3f"} Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.094464 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-29stw"] Dec 09 12:26:51 crc kubenswrapper[4703]: E1209 12:26:51.094945 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa6f4bf-5553-4149-ae85-b17092bef181" containerName="mariadb-database-create" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.094962 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa6f4bf-5553-4149-ae85-b17092bef181" containerName="mariadb-database-create" Dec 09 12:26:51 crc kubenswrapper[4703]: E1209 12:26:51.094991 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f188683b-b8cf-4aab-81cb-ad7318e3e07f" containerName="mariadb-account-create-update" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.094999 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f188683b-b8cf-4aab-81cb-ad7318e3e07f" containerName="mariadb-account-create-update" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.095230 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa6f4bf-5553-4149-ae85-b17092bef181" containerName="mariadb-database-create" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.095255 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f188683b-b8cf-4aab-81cb-ad7318e3e07f" containerName="mariadb-account-create-update" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.096142 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.098983 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.099543 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5lwc2" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.128566 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-29stw"] Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.145268 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.145356 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.145443 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.145493 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmls\" (UniqueName: \"kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.247791 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.247915 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.247989 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmls\" (UniqueName: \"kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.248171 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.257160 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.257183 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.257435 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.270216 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmls\" (UniqueName: \"kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls\") pod \"glance-db-sync-29stw\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.419645 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-29stw" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.765954 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-lsj78" Dec 09 12:26:51 crc kubenswrapper[4703]: I1209 12:26:51.917053 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="be3c1046-2c78-46ab-a62f-f4270561ca1c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.574395 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:52 crc kubenswrapper[4703]: W1209 12:26:52.609361 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4e5cccd_9c7c_467b_a10b_4a989ea688e3.slice/crio-7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b WatchSource:0}: Error finding container 7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b: Status 404 returned error can't find the container with id 7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.609872 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-29stw"] Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.691870 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-29stw" event={"ID":"e4e5cccd-9c7c-467b-a10b-4a989ea688e3","Type":"ContainerStarted","Data":"7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b"} Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.696965 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-ht5qk" event={"ID":"f4885ff5-3e9f-4297-8601-9b058062c8c3","Type":"ContainerDied","Data":"a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce"} Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.697234 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a378e174fd4607d4ac9d13a483e552e20206d285514b7955f8a81a6289149dce" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.697353 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lsj78-config-ht5qk" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.753230 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.753551 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.753861 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phn5s\" (UniqueName: \"kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.753978 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.754075 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.754236 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts\") pod \"f4885ff5-3e9f-4297-8601-9b058062c8c3\" (UID: \"f4885ff5-3e9f-4297-8601-9b058062c8c3\") " Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.755720 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.756602 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.757452 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run" (OuterVolumeSpecName: "var-run") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.757562 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.758558 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts" (OuterVolumeSpecName: "scripts") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.765133 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s" (OuterVolumeSpecName: "kube-api-access-phn5s") pod "f4885ff5-3e9f-4297-8601-9b058062c8c3" (UID: "f4885ff5-3e9f-4297-8601-9b058062c8c3"). InnerVolumeSpecName "kube-api-access-phn5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857033 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phn5s\" (UniqueName: \"kubernetes.io/projected/f4885ff5-3e9f-4297-8601-9b058062c8c3-kube-api-access-phn5s\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857080 4703 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857091 4703 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857099 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857108 4703 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4885ff5-3e9f-4297-8601-9b058062c8c3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:52 crc kubenswrapper[4703]: I1209 12:26:52.857119 4703 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4885ff5-3e9f-4297-8601-9b058062c8c3-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.185423 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.185727 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="prometheus" containerID="cri-o://605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83" gracePeriod=600 Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.185876 4703 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="thanos-sidecar" containerID="cri-o://d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d" gracePeriod=600 Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.185970 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="config-reloader" containerID="cri-o://30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d" gracePeriod=600 Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.712073 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lsj78-config-ht5qk"] Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.721780 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lsj78-config-ht5qk"] Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.872463 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-lsj78-config-rg8rv"] Dec 09 12:26:53 crc kubenswrapper[4703]: E1209 12:26:53.872887 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4885ff5-3e9f-4297-8601-9b058062c8c3" containerName="ovn-config" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.872901 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4885ff5-3e9f-4297-8601-9b058062c8c3" containerName="ovn-config" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.873084 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4885ff5-3e9f-4297-8601-9b058062c8c3" containerName="ovn-config" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.873817 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.877181 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.891797 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78-config-rg8rv"] Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.971184 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983621 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983706 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983785 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq7x4\" (UniqueName: \"kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983855 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983901 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:53 crc kubenswrapper[4703]: I1209 12:26:53.983927 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.086672 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq7x4\" (UniqueName: \"kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4\") pod 
\"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087095 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087445 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087508 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087581 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087620 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.088528 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.087657 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.089450 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.090109 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: 
\"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.090698 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.108276 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq7x4\" (UniqueName: \"kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4\") pod \"ovn-controller-lsj78-config-rg8rv\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.199984 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.372168 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404001 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404062 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404165 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404333 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404431 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404456 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404481 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ltxxx\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.404501 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file\") pod \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\" (UID: \"4ff9ddab-6c86-431b-b31b-3ec7372b7144\") " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.405625 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.411509 4703 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ff9ddab-6c86-431b-b31b-3ec7372b7144-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.413373 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config" (OuterVolumeSpecName: "config") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.413581 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx" (OuterVolumeSpecName: "kube-api-access-ltxxx") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "kube-api-access-ltxxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.415459 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.416446 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out" (OuterVolumeSpecName: "config-out") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.423491 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.461726 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "pvc-71cbae65-332b-48bf-9748-f734228bec27". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.466447 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config" (OuterVolumeSpecName: "web-config") pod "4ff9ddab-6c86-431b-b31b-3ec7372b7144" (UID: "4ff9ddab-6c86-431b-b31b-3ec7372b7144"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.513125 4703 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.513217 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") on node \"crc\" " Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.513230 4703 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.513242 4703 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.513254 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltxxx\" (UniqueName: \"kubernetes.io/projected/4ff9ddab-6c86-431b-b31b-3ec7372b7144-kube-api-access-ltxxx\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.520314 4703 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.520329 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ff9ddab-6c86-431b-b31b-3ec7372b7144-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.543436 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME 
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.543642 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-71cbae65-332b-48bf-9748-f734228bec27" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27") on node "crc"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.622049 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.722967 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d" exitCode=0
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723025 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d" exitCode=0
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723036 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83" exitCode=0
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723078 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerDied","Data":"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"}
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723094 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723125 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerDied","Data":"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"}
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723141 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerDied","Data":"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"}
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723158 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ff9ddab-6c86-431b-b31b-3ec7372b7144","Type":"ContainerDied","Data":"14de0b022292cf950d4236a81b72d33a2c58a631a903965ca39cd975201f61c6"}
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.723178 4703 scope.go:117] "RemoveContainer" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.792543 4703 scope.go:117] "RemoveContainer" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"
Dec 09 12:26:54 crc kubenswrapper[4703]: W1209 12:26:54.849354 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod503838a9_bcc9_4360_adac_b9d27a5607cc.slice/crio-9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9 WatchSource:0}: Error finding container 9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9: Status 404 returned error can't find the container with id 9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.861046 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.904342 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.909615 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-lsj78-config-rg8rv"]
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.910919 4703 scope.go:117] "RemoveContainer" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.930274 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:54 crc kubenswrapper[4703]: E1209 12:26:54.930775 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="config-reloader"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.930793 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="config-reloader"
Dec 09 12:26:54 crc kubenswrapper[4703]: E1209 12:26:54.930815 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="init-config-reloader"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.930826 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="init-config-reloader"
Dec 09 12:26:54 crc kubenswrapper[4703]: E1209 12:26:54.930836 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="prometheus"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.930844 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="prometheus"
Dec 09 12:26:54 crc kubenswrapper[4703]: E1209 12:26:54.930874 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="thanos-sidecar"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.930882 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="thanos-sidecar"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.931136 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="thanos-sidecar"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.931154 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="prometheus"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.931180 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" containerName="config-reloader"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.934425 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.939476 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.939683 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.939853 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.941169 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.941649 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.943546 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7dvs9"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.951549 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.952826 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 09 12:26:54 crc kubenswrapper[4703]: I1209 12:26:54.984089 4703 scope.go:117] "RemoveContainer" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.034463 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.034783 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.034876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.034958 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzr9g\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-kube-api-access-nzr9g\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035019 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035038 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035071 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035097 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035160 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035179 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.035537 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30ba4b6c-b025-4ab0-b589-a2c72caf1997-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.047749 4703 scope.go:117] "RemoveContainer" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"
Dec 09 12:26:55 crc kubenswrapper[4703]: E1209 12:26:55.048272 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": container with ID starting with d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d not found: ID does not exist" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.048312 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"} err="failed to get container status \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": rpc error: code = NotFound desc = could not find container \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": container with ID starting with d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.048341 4703 scope.go:117] "RemoveContainer" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"
Dec 09 12:26:55 crc kubenswrapper[4703]: E1209 12:26:55.048673 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": container with ID starting with 30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d not found: ID does not exist" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.048704 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"} err="failed to get container status \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": rpc error: code = NotFound desc = could not find container \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": container with ID starting with 30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.048722 4703 scope.go:117] "RemoveContainer" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"
Dec 09 12:26:55 crc kubenswrapper[4703]: E1209 12:26:55.053991 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": container with ID starting with 605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83 not found: ID does not exist" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.054039 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"} err="failed to get container status \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": rpc error: code = NotFound desc = could not find container \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": container with ID starting with 605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.054067 4703 scope.go:117] "RemoveContainer" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"
Dec 09 12:26:55 crc kubenswrapper[4703]: E1209 12:26:55.057992 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": container with ID starting with c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33 not found: ID does not exist" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.058058 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"} err="failed to get container status \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": rpc error: code = NotFound desc = could not find container \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": container with ID starting with c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.058092 4703 scope.go:117] "RemoveContainer" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059174 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"} err="failed to get container status \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": rpc error: code = NotFound desc = could not find container \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": container with ID starting with d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059211 4703 scope.go:117] "RemoveContainer" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059452 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"} err="failed to get container status \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": rpc error: code = NotFound desc = could not find container \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": container with ID starting with 30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059475 4703 scope.go:117] "RemoveContainer" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059736 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"} err="failed to get container status \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": rpc error: code = NotFound desc = could not find container \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": container with ID starting with 605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059769 4703 scope.go:117] "RemoveContainer" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.059996 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"} err="failed to get container status \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": rpc error: code = NotFound desc = could not find container \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": container with ID starting with c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.060017 4703 scope.go:117] "RemoveContainer" containerID="d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.060294 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d"} err="failed to get container status \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": rpc error: code = NotFound desc = could not find container \"d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d\": container with ID starting with d3f263676dcde21f82db11c76743339c4a32612a5f214f238f61a275430d703d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.060322 4703 scope.go:117] "RemoveContainer" containerID="30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.060677 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d"} err="failed to get container status \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": rpc error: code = NotFound desc = could not find container \"30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d\": container with ID starting with 30c179592cb0fc711f436fe3bbcd1a54682da5299dbb1ad561f8df2842cf104d not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.060712 4703 scope.go:117] "RemoveContainer" containerID="605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.064144 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83"} err="failed to get container status \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": rpc error: code = NotFound desc = could not find container \"605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83\": container with ID starting with 605cbc593b56d04e246306ab18f15c7cd0add555c9a2edf5bf49679a37d0aa83 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.064168 4703 scope.go:117] "RemoveContainer" containerID="c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.064702 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33"} err="failed to get container status \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": rpc error: code = NotFound desc = could not find container \"c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33\": container with ID starting with c6e7abdf451223dfd615035d786a66424b733ba4e4b0836f75e40f06e3986a33 not found: ID does not exist"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.093962 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff9ddab-6c86-431b-b31b-3ec7372b7144" path="/var/lib/kubelet/pods/4ff9ddab-6c86-431b-b31b-3ec7372b7144/volumes"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.094950 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4885ff5-3e9f-4297-8601-9b058062c8c3" path="/var/lib/kubelet/pods/f4885ff5-3e9f-4297-8601-9b058062c8c3/volumes"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137520 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137586 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137660 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30ba4b6c-b025-4ab0-b589-a2c72caf1997-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137734 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137778 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137827 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137876 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzr9g\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-kube-api-access-nzr9g\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.137961 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.138002 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.138064 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.138097 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.146468 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/30ba4b6c-b025-4ab0-b589-a2c72caf1997-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.147259 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.148262 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.148323 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.148998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.150201 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.150926 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/30ba4b6c-b025-4ab0-b589-a2c72caf1997-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.150934 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/30ba4b6c-b025-4ab0-b589-a2c72caf1997-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.151514 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.151527 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.151574 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d37810f32880edcabb6347a05932ab8c2bcb1d3e05f799d5d3282bcd71eea829/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.189875 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzr9g\" (UniqueName: \"kubernetes.io/projected/30ba4b6c-b025-4ab0-b589-a2c72caf1997-kube-api-access-nzr9g\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.286924 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71cbae65-332b-48bf-9748-f734228bec27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cbae65-332b-48bf-9748-f734228bec27\") pod \"prometheus-metric-storage-0\" (UID: \"30ba4b6c-b025-4ab0-b589-a2c72caf1997\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.571681 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.746103 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-rg8rv" event={"ID":"503838a9-bcc9-4360-adac-b9d27a5607cc","Type":"ContainerStarted","Data":"f01ccc31dd3ac96a8b8d86143102d3fdb78e32cde1edbe82ee282feaebacf713"}
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.746599 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-rg8rv" event={"ID":"503838a9-bcc9-4360-adac-b9d27a5607cc","Type":"ContainerStarted","Data":"9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9"}
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.755831 4703 generic.go:334] "Generic (PLEG): container finished" podID="2adbe90b-b3dc-480b-9ab1-f6084b5dee94" containerID="fc8e6f088e45a1b094a10a49b724eee8587616d028b6539a3aa4b2f1221bc7d6" exitCode=0
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.755899 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cscwb" event={"ID":"2adbe90b-b3dc-480b-9ab1-f6084b5dee94","Type":"ContainerDied","Data":"fc8e6f088e45a1b094a10a49b724eee8587616d028b6539a3aa4b2f1221bc7d6"}
Dec 09 12:26:55 crc kubenswrapper[4703]: I1209 12:26:55.779692 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-lsj78-config-rg8rv" podStartSLOduration=2.7796748019999997 podStartE2EDuration="2.779674802s" podCreationTimestamp="2025-12-09 12:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:55.773050997 +0000 UTC m=+1315.021814516" watchObservedRunningTime="2025-12-09 12:26:55.779674802 +0000 UTC m=+1315.028438321"
Dec 09 12:26:56 crc kubenswrapper[4703]: I1209 12:26:56.135843 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:56 crc kubenswrapper[4703]: W1209 12:26:56.150737 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30ba4b6c_b025_4ab0_b589_a2c72caf1997.slice/crio-bb3e048f955ac9771b85b3406705dcd682dffa784157ca9ff6447e47eee012fb WatchSource:0}: Error finding container bb3e048f955ac9771b85b3406705dcd682dffa784157ca9ff6447e47eee012fb: Status 404 returned error can't find the container with id bb3e048f955ac9771b85b3406705dcd682dffa784157ca9ff6447e47eee012fb
Dec 09 12:26:56 crc kubenswrapper[4703]: I1209 12:26:56.766876 4703 generic.go:334] "Generic (PLEG): container finished" podID="503838a9-bcc9-4360-adac-b9d27a5607cc" containerID="f01ccc31dd3ac96a8b8d86143102d3fdb78e32cde1edbe82ee282feaebacf713" exitCode=0
Dec 09 12:26:56 crc kubenswrapper[4703]: I1209 12:26:56.767078 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-rg8rv" event={"ID":"503838a9-bcc9-4360-adac-b9d27a5607cc","Type":"ContainerDied","Data":"f01ccc31dd3ac96a8b8d86143102d3fdb78e32cde1edbe82ee282feaebacf713"}
Dec 09 12:26:56 crc kubenswrapper[4703]: I1209 12:26:56.768553 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerStarted","Data":"bb3e048f955ac9771b85b3406705dcd682dffa784157ca9ff6447e47eee012fb"}
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.322997 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.491900 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492002 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492211 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492256 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492285 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492347 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.492367 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9t4m\" (UniqueName: \"kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m\") pod \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\" (UID: \"2adbe90b-b3dc-480b-9ab1-f6084b5dee94\") "
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.494239 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.494538 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.502853 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m" (OuterVolumeSpecName: "kube-api-access-h9t4m") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "kube-api-access-h9t4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.505136 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.531728 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts" (OuterVolumeSpecName: "scripts") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.568471 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.575414 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2adbe90b-b3dc-480b-9ab1-f6084b5dee94" (UID: "2adbe90b-b3dc-480b-9ab1-f6084b5dee94"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597392 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597425 4703 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597435 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597448 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9t4m\" (UniqueName: \"kubernetes.io/projected/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-kube-api-access-h9t4m\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597458 4703 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-ring-data-devices\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597470 4703 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-dispersionconf\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.597483 4703 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2adbe90b-b3dc-480b-9ab1-f6084b5dee94-swiftconf\") on node \"crc\" DevicePath \"\""
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.800297 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cscwb"
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.813132 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cscwb" event={"ID":"2adbe90b-b3dc-480b-9ab1-f6084b5dee94","Type":"ContainerDied","Data":"19cfaa20be13e6c3f204858971397f33f5898083c5b82e4e66a25ab03e4e9148"}
Dec 09 12:26:57 crc kubenswrapper[4703]: I1209 12:26:57.813233 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19cfaa20be13e6c3f204858971397f33f5898083c5b82e4e66a25ab03e4e9148"
Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.076712 4703 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216150 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216275 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq7x4\" (UniqueName: \"kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216332 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216449 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216528 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216492 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216603 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run" (OuterVolumeSpecName: "var-run") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216810 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.216983 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts\") pod \"503838a9-bcc9-4360-adac-b9d27a5607cc\" (UID: \"503838a9-bcc9-4360-adac-b9d27a5607cc\") " Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.217667 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.217993 4703 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.218014 4703 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.218026 4703 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.218028 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts" (OuterVolumeSpecName: "scripts") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.218040 4703 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/503838a9-bcc9-4360-adac-b9d27a5607cc-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.225060 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4" (OuterVolumeSpecName: "kube-api-access-vq7x4") pod "503838a9-bcc9-4360-adac-b9d27a5607cc" (UID: "503838a9-bcc9-4360-adac-b9d27a5607cc"). InnerVolumeSpecName "kube-api-access-vq7x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.319363 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq7x4\" (UniqueName: \"kubernetes.io/projected/503838a9-bcc9-4360-adac-b9d27a5607cc-kube-api-access-vq7x4\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.319402 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/503838a9-bcc9-4360-adac-b9d27a5607cc-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.826046 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-lsj78-config-rg8rv" event={"ID":"503838a9-bcc9-4360-adac-b9d27a5607cc","Type":"ContainerDied","Data":"9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9"} Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.826093 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9464887615e2ee34f81ed673b06b53719cbbe960909b3f4f5184b156eef2c3a9" Dec 09 12:26:59 crc kubenswrapper[4703]: I1209 12:26:59.826152 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-lsj78-config-rg8rv" Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.083162 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.083258 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.182960 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-lsj78-config-rg8rv"] Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.196920 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-lsj78-config-rg8rv"] Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.750243 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0" Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.767057 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44-etc-swift\") pod \"swift-storage-0\" (UID: \"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44\") " pod="openstack/swift-storage-0" Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.819636 4703 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.819636 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 09 12:27:00 crc kubenswrapper[4703]: I1209 12:27:00.863024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerStarted","Data":"43e85dbc8494b92aec2888918996d45c961ec4d3c62b035f3921f8d01fc1840a"}
Dec 09 12:27:01 crc kubenswrapper[4703]: I1209 12:27:01.118809 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503838a9-bcc9-4360-adac-b9d27a5607cc" path="/var/lib/kubelet/pods/503838a9-bcc9-4360-adac-b9d27a5607cc/volumes"
Dec 09 12:27:01 crc kubenswrapper[4703]: I1209 12:27:01.625826 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 09 12:27:01 crc kubenswrapper[4703]: W1209 12:27:01.629671 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad4bf88_77c9_4a0a_8fad_75f44d1b4f44.slice/crio-80b8986ea5f002bb41da03771f76452a5441d63c541bcc2ae467d05e364a223a WatchSource:0}: Error finding container 80b8986ea5f002bb41da03771f76452a5441d63c541bcc2ae467d05e364a223a: Status 404 returned error can't find the container with id 80b8986ea5f002bb41da03771f76452a5441d63c541bcc2ae467d05e364a223a
Dec 09 12:27:01 crc kubenswrapper[4703]: I1209 12:27:01.781429 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="be3c1046-2c78-46ab-a62f-f4270561ca1c" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Dec 09 12:27:01 crc kubenswrapper[4703]: I1209 12:27:01.882770 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"80b8986ea5f002bb41da03771f76452a5441d63c541bcc2ae467d05e364a223a"}
Dec 09 12:27:02 crc kubenswrapper[4703]: I1209 12:27:02.933428 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.358463 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.420608 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8tkw9"]
Dec 09 12:27:03 crc kubenswrapper[4703]: E1209 12:27:03.421144 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503838a9-bcc9-4360-adac-b9d27a5607cc" containerName="ovn-config"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.421171 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="503838a9-bcc9-4360-adac-b9d27a5607cc" containerName="ovn-config"
Dec 09 12:27:03 crc kubenswrapper[4703]: E1209 12:27:03.421209 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2adbe90b-b3dc-480b-9ab1-f6084b5dee94" containerName="swift-ring-rebalance"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.421220 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2adbe90b-b3dc-480b-9ab1-f6084b5dee94" containerName="swift-ring-rebalance"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.421425 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2adbe90b-b3dc-480b-9ab1-f6084b5dee94" containerName="swift-ring-rebalance"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.421467 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="503838a9-bcc9-4360-adac-b9d27a5607cc" containerName="ovn-config"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.422517 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.445576 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8tkw9"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.488582 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b7ce-account-create-update-c4zgt"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.490312 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.499922 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.504514 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b7ce-account-create-update-c4zgt"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.520008 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.520048 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkqx\" (UniqueName: \"kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.596622 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7693-account-create-update-2gdqt"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.603275 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.606383 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.621516 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xgqf\" (UniqueName: \"kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.621630 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.621663 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxkqx\" (UniqueName: \"kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.621761 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.622636 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.627792 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7693-account-create-update-2gdqt"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.664233 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxkqx\" (UniqueName: \"kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx\") pod \"barbican-db-create-8tkw9\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.726883 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4wl7p"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.727686 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.727823 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc5gl\" (UniqueName: \"kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.728276 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xgqf\" (UniqueName: \"kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.728354 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.732827 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.751794 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.757543 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8tkw9"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.758078 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wl7p"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.793232 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4fqbv"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.796833 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4fqbv"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.806236 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xgqf\" (UniqueName: \"kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf\") pod \"barbican-b7ce-account-create-update-c4zgt\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.806376 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.806377 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.806627 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s28wx"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.822461 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b7ce-account-create-update-c4zgt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.830609 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.832170 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc5gl\" (UniqueName: \"kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.832234 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.832286 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92d5\" (UniqueName: \"kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.832343 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.851040 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.853124 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4fqbv"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.903085 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc5gl\" (UniqueName: \"kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl\") pod \"cinder-7693-account-create-update-2gdqt\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.911909 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-ed5b-account-create-update-2nct2"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.913459 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.926531 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.930200 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7693-account-create-update-2gdqt"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.935792 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.935862 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92d5\" (UniqueName: \"kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.935900 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.935927 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvpm\" (UniqueName: \"kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.935958 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.936922 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.940539 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-bfgfz"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.942027 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-bfgfz"
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.960358 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-ed5b-account-create-update-2nct2"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.968105 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-bfgfz"]
Dec 09 12:27:03 crc kubenswrapper[4703]: I1209 12:27:03.999817 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92d5\" (UniqueName: \"kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5\") pod \"cinder-db-create-4wl7p\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " pod="openstack/cinder-db-create-4wl7p"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.008457 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0f3e-account-create-update-2cnnf"]
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.009787 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f3e-account-create-update-2cnnf"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.012320 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038610 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038673 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038706 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrggw\" (UniqueName: \"kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038727 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvkj\" (UniqueName: \"kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2"
Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038766 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv"
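
The VerifyControllerAttachedVolume / MountVolume / MountVolume.SetUp progressions above (reconciler_common.go, operation_generator.go) come from the kubelet volume manager's reconciler, which repeatedly diffs the desired state (volumes declared by pods scheduled to the node) against the actual state (what is attached and mounted) and issues mount or unmount operations for the difference. The toy Go sketch below illustrates only that diff-loop idea; the two-set model and the types are deliberately simplified and are not kubelet's API:

    package main

    import "fmt"

    // reconcile is a toy model of one reconcile pass: mount whatever is
    // desired but not yet actual, unmount whatever is actual but no longer
    // desired. The real reconciler tracks attach/mount phases per pod/device.
    func reconcile(desired, actual map[string]bool) {
    	for name := range desired {
    		if !actual[name] {
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", name)
    			actual[name] = true // assume the operation succeeds
    			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
    		}
    	}
    	for name := range actual {
    		if !desired[name] {
    			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", name)
    			delete(actual, name)
    		}
    	}
    }

    func main() {
    	// Volume names borrowed from the entries above; the sets are invented.
    	desired := map[string]bool{"operator-scripts": true, "kube-api-access-m92d5": true}
    	actual := map[string]bool{"kube-api-access-vq7x4": true} // pod already deleted
    	reconcile(desired, actual)
    }
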
started for volume \"kube-api-access-jpvpm\" (UniqueName: \"kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.038808 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.043583 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.051758 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f3e-account-create-update-2cnnf"] Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.052789 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.082415 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvpm\" (UniqueName: \"kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm\") pod \"keystone-db-sync-4fqbv\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.088394 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wl7p" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.094673 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7rnfn"] Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.097478 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.101031 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rnfn"] Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.140920 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.141006 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.141057 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.141108 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrggw\" (UniqueName: \"kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.141136 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvkj\" (UniqueName: \"kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.141167 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmbs\" (UniqueName: \"kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.142014 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.142588 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 
12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.196884 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrggw\" (UniqueName: \"kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw\") pod \"cloudkitty-db-create-bfgfz\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.214984 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvkj\" (UniqueName: \"kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj\") pod \"cloudkitty-ed5b-account-create-update-2nct2\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.242687 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.242820 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.242919 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq6q\" (UniqueName: \"kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.242942 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmbs\" (UniqueName: \"kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.244004 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.262710 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qmbs\" (UniqueName: \"kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs\") pod \"neutron-0f3e-account-create-update-2cnnf\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.263674 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.336854 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.346403 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq6q\" (UniqueName: \"kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.346470 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.347335 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.374219 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq6q\" (UniqueName: \"kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q\") pod \"neutron-db-create-7rnfn\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.400361 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.418787 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:04 crc kubenswrapper[4703]: I1209 12:27:04.440149 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:07 crc kubenswrapper[4703]: I1209 12:27:07.948718 4703 generic.go:334] "Generic (PLEG): container finished" podID="30ba4b6c-b025-4ab0-b589-a2c72caf1997" containerID="43e85dbc8494b92aec2888918996d45c961ec4d3c62b035f3921f8d01fc1840a" exitCode=0 Dec 09 12:27:07 crc kubenswrapper[4703]: I1209 12:27:07.949433 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerDied","Data":"43e85dbc8494b92aec2888918996d45c961ec4d3c62b035f3921f8d01fc1840a"} Dec 09 12:27:11 crc kubenswrapper[4703]: E1209 12:27:11.175155 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 09 12:27:11 crc kubenswrapper[4703]: E1209 12:27:11.175842 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brmls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-29stw_openstack(e4e5cccd-9c7c-467b-a10b-4a989ea688e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:27:11 crc kubenswrapper[4703]: E1209 12:27:11.177215 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-29stw" 
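
The glance-db-sync failure above is a cancelled image pull surfacing first as ErrImagePull and, a few entries later, as ImagePullBackOff once the kubelet starts delaying retries. The retry shape is exponential backoff with a cap; the Go sketch below illustrates that shape only, with assumed parameters (the initial delay, cap, and attempt bound are not kubelet's actual, configuration-dependent values):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pullWithBackoff retries a failing pull, doubling the delay up to maxDelay --
    // the "Back-off pulling image ..." pattern seen for glance-db-sync above.
    func pullWithBackoff(image string, pull func(string) error, attempts int, initial, maxDelay time.Duration) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = pull(image); err == nil {
    			return nil
    		}
    		fmt.Printf("ErrImagePull: %v; ImagePullBackOff for %s\n", err, delay)
    		time.Sleep(delay)
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	return err
    }

    func main() {
    	failing := func(string) error { return errors.New("copying config: context canceled") }
    	// Millisecond delays keep the demo fast; the kubelet's real backoff
    	// operates on a scale of seconds to minutes.
    	_ = pullWithBackoff("quay.io/podified-antelope-centos9/openstack-glance-api:current-podified",
    		failing, 3, 10*time.Millisecond, time.Second)
    }
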
podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" Dec 09 12:27:11 crc kubenswrapper[4703]: I1209 12:27:11.684075 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8tkw9"] Dec 09 12:27:11 crc kubenswrapper[4703]: I1209 12:27:11.780536 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Dec 09 12:27:11 crc kubenswrapper[4703]: I1209 12:27:11.905740 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4fqbv"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.011110 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4fqbv" event={"ID":"d5253b89-3c89-45a9-8806-dfabf281ebbb","Type":"ContainerStarted","Data":"f78ae3f0a9d2a34e6e80a4f520b632a45d9ed52b0ffed8a5ad3cfdcc52ac4564"} Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.013689 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8tkw9" event={"ID":"e7648d9a-d5c5-4854-ad02-ddf686748d6a","Type":"ContainerStarted","Data":"29a5e0615e2787cf25773706a29faeb2a4d8601546904c8fdfe07d6a790e5463"} Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.019294 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerStarted","Data":"4263c4d0ae77bbdfedbe281bd5e368f6b88ef6850d01c8838b0e702a94398a53"} Dec 09 12:27:12 crc kubenswrapper[4703]: E1209 12:27:12.024507 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-29stw" podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.225757 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7693-account-create-update-2gdqt"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.248628 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rnfn"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.310806 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4wl7p"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.319675 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-ed5b-account-create-update-2nct2"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.330833 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-bfgfz"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.346420 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0f3e-account-create-update-2cnnf"] Dec 09 12:27:12 crc kubenswrapper[4703]: I1209 12:27:12.360633 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b7ce-account-create-update-c4zgt"] Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.049290 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-bfgfz" event={"ID":"72429324-7660-4f13-8658-ede06684d423","Type":"ContainerStarted","Data":"b6160557e8a257a5b09019da1693875c6ef6aaa5f78491ea3ff92b8e8647cdcc"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.054480 4703 generic.go:334] "Generic (PLEG): container finished" podID="e7648d9a-d5c5-4854-ad02-ddf686748d6a" 
containerID="05069b6086c0d5d0c7d734afa0ed5f28ddde9eb9e989d5d1c9b476975f9a73b2" exitCode=0 Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.054565 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8tkw9" event={"ID":"e7648d9a-d5c5-4854-ad02-ddf686748d6a","Type":"ContainerDied","Data":"05069b6086c0d5d0c7d734afa0ed5f28ddde9eb9e989d5d1c9b476975f9a73b2"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.059406 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wl7p" event={"ID":"bf475bf7-37dd-4920-888c-b53e8065591e","Type":"ContainerStarted","Data":"c5a7ec5501c7e3387ab6338b56b039be4337cda33dd048a1a5394a38dd61a458"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.063780 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" event={"ID":"8a9ded47-bcfd-4613-9fec-80c12e9a64b0","Type":"ContainerStarted","Data":"ed9c554c8a1c0f2664cecba17cdce136e56b59fa050e83089913a6eeeabbacff"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.065698 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f3e-account-create-update-2cnnf" event={"ID":"4231ddd2-a973-43ee-bfa0-3ab32515beb8","Type":"ContainerStarted","Data":"d343c5b443d17ede4887b15ea4eae5492f48bcff5b8890f396f256b2a7473c9f"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.068022 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b7ce-account-create-update-c4zgt" event={"ID":"5c0dec58-a2d5-44ef-8a01-2d7b555aed11","Type":"ContainerStarted","Data":"15564143959d11f5aaedd44501d037adb9f075132e1bb8cb91562451652ea4d6"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.085462 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rnfn" event={"ID":"90165483-6678-444e-bda2-249f59813ba9","Type":"ContainerStarted","Data":"34b45f38fd4d1e1e83628b3a5387533a77a5aa9a8265f0b32dea31334e7c81a1"} Dec 09 12:27:13 crc kubenswrapper[4703]: I1209 12:27:13.085529 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7693-account-create-update-2gdqt" event={"ID":"9640f126-3eb2-4402-a141-f3bb22b15f40","Type":"ContainerStarted","Data":"ea0e7a8b640b1bef076002873018f918cbcc816ff878096f3519d22918676dda"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.093161 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b7ce-account-create-update-c4zgt" event={"ID":"5c0dec58-a2d5-44ef-8a01-2d7b555aed11","Type":"ContainerStarted","Data":"cc004dec1884e7bde620a684ee12b6a54fda82586a1a849048e930e961a48e98"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.096562 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rnfn" event={"ID":"90165483-6678-444e-bda2-249f59813ba9","Type":"ContainerStarted","Data":"c93d36ae95a018ec0cad8bb7d4331b801f4ee97e8ec9fde125cf2ee8564bded6"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.100033 4703 generic.go:334] "Generic (PLEG): container finished" podID="bf475bf7-37dd-4920-888c-b53e8065591e" containerID="f2aabc9a8ae2ebe702d2c1fa8a9cf62a8a17b1b6a75187b7d06fdbebde54602b" exitCode=0 Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.100152 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wl7p" event={"ID":"bf475bf7-37dd-4920-888c-b53e8065591e","Type":"ContainerDied","Data":"f2aabc9a8ae2ebe702d2c1fa8a9cf62a8a17b1b6a75187b7d06fdbebde54602b"} Dec 09 12:27:14 crc 
kubenswrapper[4703]: I1209 12:27:14.105330 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f3e-account-create-update-2cnnf" event={"ID":"4231ddd2-a973-43ee-bfa0-3ab32515beb8","Type":"ContainerStarted","Data":"d7be62cb0c4ea0c1d8b1cb1b97e23d227b57b22d5ef05ada651409b6e39a4e1f"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.111021 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7693-account-create-update-2gdqt" event={"ID":"9640f126-3eb2-4402-a141-f3bb22b15f40","Type":"ContainerStarted","Data":"92be794a9d02e8f84283d05339272c02db8f36ab4afebcd8203b7e9346dc941a"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.117239 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"3f005d6cc6d9c9dccd75daf700cee0f2e724a88bf4928b91e7249d3784f41ac3"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.127161 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-bfgfz" event={"ID":"72429324-7660-4f13-8658-ede06684d423","Type":"ContainerStarted","Data":"cc4bdd8d09115be028816c8cc89586b2c346a0671b25aa43d6e71daed0bad9b9"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.143322 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" event={"ID":"8a9ded47-bcfd-4613-9fec-80c12e9a64b0","Type":"ContainerStarted","Data":"5317d50e770dc7f504e942603be31254e3829b1453e26c80c41adea181fc1596"} Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.144582 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-b7ce-account-create-update-c4zgt" podStartSLOduration=11.14455494 podStartE2EDuration="11.14455494s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.117034607 +0000 UTC m=+1333.365798136" watchObservedRunningTime="2025-12-09 12:27:14.14455494 +0000 UTC m=+1333.393318459" Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.144945 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-0f3e-account-create-update-2cnnf" podStartSLOduration=11.144937191 podStartE2EDuration="11.144937191s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.143598705 +0000 UTC m=+1333.392362224" watchObservedRunningTime="2025-12-09 12:27:14.144937191 +0000 UTC m=+1333.393700710" Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.171012 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7rnfn" podStartSLOduration=10.170987155 podStartE2EDuration="10.170987155s" podCreationTimestamp="2025-12-09 12:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.16163583 +0000 UTC m=+1333.410399369" watchObservedRunningTime="2025-12-09 12:27:14.170987155 +0000 UTC m=+1333.419750674" Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.224365 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7693-account-create-update-2gdqt" podStartSLOduration=11.224345687 
podStartE2EDuration="11.224345687s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.20805306 +0000 UTC m=+1333.456816589" watchObservedRunningTime="2025-12-09 12:27:14.224345687 +0000 UTC m=+1333.473109206" Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.241201 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" podStartSLOduration=11.241154989 podStartE2EDuration="11.241154989s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.229601606 +0000 UTC m=+1333.478365135" watchObservedRunningTime="2025-12-09 12:27:14.241154989 +0000 UTC m=+1333.489918528" Dec 09 12:27:14 crc kubenswrapper[4703]: I1209 12:27:14.264007 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-bfgfz" podStartSLOduration=11.26398191 podStartE2EDuration="11.26398191s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:14.246526321 +0000 UTC m=+1333.495289840" watchObservedRunningTime="2025-12-09 12:27:14.26398191 +0000 UTC m=+1333.512745439" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.018235 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8tkw9" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.162114 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8tkw9" event={"ID":"e7648d9a-d5c5-4854-ad02-ddf686748d6a","Type":"ContainerDied","Data":"29a5e0615e2787cf25773706a29faeb2a4d8601546904c8fdfe07d6a790e5463"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.162490 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a5e0615e2787cf25773706a29faeb2a4d8601546904c8fdfe07d6a790e5463" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.162592 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8tkw9" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.164489 4703 generic.go:334] "Generic (PLEG): container finished" podID="8a9ded47-bcfd-4613-9fec-80c12e9a64b0" containerID="5317d50e770dc7f504e942603be31254e3829b1453e26c80c41adea181fc1596" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.164557 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" event={"ID":"8a9ded47-bcfd-4613-9fec-80c12e9a64b0","Type":"ContainerDied","Data":"5317d50e770dc7f504e942603be31254e3829b1453e26c80c41adea181fc1596"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.167134 4703 generic.go:334] "Generic (PLEG): container finished" podID="4231ddd2-a973-43ee-bfa0-3ab32515beb8" containerID="d7be62cb0c4ea0c1d8b1cb1b97e23d227b57b22d5ef05ada651409b6e39a4e1f" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.167280 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f3e-account-create-update-2cnnf" event={"ID":"4231ddd2-a973-43ee-bfa0-3ab32515beb8","Type":"ContainerDied","Data":"d7be62cb0c4ea0c1d8b1cb1b97e23d227b57b22d5ef05ada651409b6e39a4e1f"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.169836 4703 generic.go:334] "Generic (PLEG): container finished" podID="5c0dec58-a2d5-44ef-8a01-2d7b555aed11" containerID="cc004dec1884e7bde620a684ee12b6a54fda82586a1a849048e930e961a48e98" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.169910 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b7ce-account-create-update-c4zgt" event={"ID":"5c0dec58-a2d5-44ef-8a01-2d7b555aed11","Type":"ContainerDied","Data":"cc004dec1884e7bde620a684ee12b6a54fda82586a1a849048e930e961a48e98"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.181015 4703 generic.go:334] "Generic (PLEG): container finished" podID="90165483-6678-444e-bda2-249f59813ba9" containerID="c93d36ae95a018ec0cad8bb7d4331b801f4ee97e8ec9fde125cf2ee8564bded6" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.181100 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rnfn" event={"ID":"90165483-6678-444e-bda2-249f59813ba9","Type":"ContainerDied","Data":"c93d36ae95a018ec0cad8bb7d4331b801f4ee97e8ec9fde125cf2ee8564bded6"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.188225 4703 generic.go:334] "Generic (PLEG): container finished" podID="9640f126-3eb2-4402-a141-f3bb22b15f40" containerID="92be794a9d02e8f84283d05339272c02db8f36ab4afebcd8203b7e9346dc941a" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.188744 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7693-account-create-update-2gdqt" event={"ID":"9640f126-3eb2-4402-a141-f3bb22b15f40","Type":"ContainerDied","Data":"92be794a9d02e8f84283d05339272c02db8f36ab4afebcd8203b7e9346dc941a"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.193458 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts\") pod \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.193552 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxkqx\" (UniqueName: 
\"kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx\") pod \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\" (UID: \"e7648d9a-d5c5-4854-ad02-ddf686748d6a\") " Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.194357 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7648d9a-d5c5-4854-ad02-ddf686748d6a" (UID: "e7648d9a-d5c5-4854-ad02-ddf686748d6a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.195371 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"0087a1c9e2d76f60a68806a77c0666be4d0a536c8feef4af81915479a4afcc9d"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.195531 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"a52139218ced358ba95d4f226216f639586fcb4ccb10730b0b647f3d888c61cb"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.197824 4703 generic.go:334] "Generic (PLEG): container finished" podID="72429324-7660-4f13-8658-ede06684d423" containerID="cc4bdd8d09115be028816c8cc89586b2c346a0671b25aa43d6e71daed0bad9b9" exitCode=0 Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.198069 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-bfgfz" event={"ID":"72429324-7660-4f13-8658-ede06684d423","Type":"ContainerDied","Data":"cc4bdd8d09115be028816c8cc89586b2c346a0671b25aa43d6e71daed0bad9b9"} Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.200713 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx" (OuterVolumeSpecName: "kube-api-access-bxkqx") pod "e7648d9a-d5c5-4854-ad02-ddf686748d6a" (UID: "e7648d9a-d5c5-4854-ad02-ddf686748d6a"). InnerVolumeSpecName "kube-api-access-bxkqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.296481 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxkqx\" (UniqueName: \"kubernetes.io/projected/e7648d9a-d5c5-4854-ad02-ddf686748d6a-kube-api-access-bxkqx\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:15 crc kubenswrapper[4703]: I1209 12:27:15.296785 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7648d9a-d5c5-4854-ad02-ddf686748d6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:16 crc kubenswrapper[4703]: I1209 12:27:16.213520 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerStarted","Data":"41dd707ea9c19ced77a50b08e8e5eea6df3618d722707e54edf82ca2beb02d62"} Dec 09 12:27:20 crc kubenswrapper[4703]: I1209 12:27:20.975055 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.006715 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.035473 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wl7p" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.086501 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b7ce-account-create-update-c4zgt" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.107077 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.121418 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7693-account-create-update-2gdqt" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136069 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qmbs\" (UniqueName: \"kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs\") pod \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136168 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts\") pod \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\" (UID: \"4231ddd2-a973-43ee-bfa0-3ab32515beb8\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136219 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m92d5\" (UniqueName: \"kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5\") pod \"bf475bf7-37dd-4920-888c-b53e8065591e\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136250 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts\") pod \"90165483-6678-444e-bda2-249f59813ba9\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136300 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts\") pod \"bf475bf7-37dd-4920-888c-b53e8065591e\" (UID: \"bf475bf7-37dd-4920-888c-b53e8065591e\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.136838 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzq6q\" (UniqueName: \"kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q\") pod \"90165483-6678-444e-bda2-249f59813ba9\" (UID: \"90165483-6678-444e-bda2-249f59813ba9\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.138068 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90165483-6678-444e-bda2-249f59813ba9" (UID: "90165483-6678-444e-bda2-249f59813ba9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.138168 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4231ddd2-a973-43ee-bfa0-3ab32515beb8" (UID: "4231ddd2-a973-43ee-bfa0-3ab32515beb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.138997 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf475bf7-37dd-4920-888c-b53e8065591e" (UID: "bf475bf7-37dd-4920-888c-b53e8065591e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.140371 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4231ddd2-a973-43ee-bfa0-3ab32515beb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.140434 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90165483-6678-444e-bda2-249f59813ba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.140446 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf475bf7-37dd-4920-888c-b53e8065591e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.157687 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.158454 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5" (OuterVolumeSpecName: "kube-api-access-m92d5") pod "bf475bf7-37dd-4920-888c-b53e8065591e" (UID: "bf475bf7-37dd-4920-888c-b53e8065591e"). InnerVolumeSpecName "kube-api-access-m92d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.160332 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q" (OuterVolumeSpecName: "kube-api-access-fzq6q") pod "90165483-6678-444e-bda2-249f59813ba9" (UID: "90165483-6678-444e-bda2-249f59813ba9"). InnerVolumeSpecName "kube-api-access-fzq6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.186091 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs" (OuterVolumeSpecName: "kube-api-access-7qmbs") pod "4231ddd2-a973-43ee-bfa0-3ab32515beb8" (UID: "4231ddd2-a973-43ee-bfa0-3ab32515beb8"). InnerVolumeSpecName "kube-api-access-7qmbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.242726 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts\") pod \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.242928 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc5gl\" (UniqueName: \"kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl\") pod \"9640f126-3eb2-4402-a141-f3bb22b15f40\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243043 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xgqf\" (UniqueName: \"kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf\") pod \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\" (UID: \"5c0dec58-a2d5-44ef-8a01-2d7b555aed11\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243105 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrggw\" (UniqueName: \"kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw\") pod \"72429324-7660-4f13-8658-ede06684d423\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243297 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts\") pod \"72429324-7660-4f13-8658-ede06684d423\" (UID: \"72429324-7660-4f13-8658-ede06684d423\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243655 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts\") pod \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243716 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snvkj\" (UniqueName: \"kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj\") pod \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\" (UID: \"8a9ded47-bcfd-4613-9fec-80c12e9a64b0\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.243796 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts\") pod \"9640f126-3eb2-4402-a141-f3bb22b15f40\" (UID: \"9640f126-3eb2-4402-a141-f3bb22b15f40\") " Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.245769 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzq6q\" (UniqueName: \"kubernetes.io/projected/90165483-6678-444e-bda2-249f59813ba9-kube-api-access-fzq6q\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.245802 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qmbs\" (UniqueName: \"kubernetes.io/projected/4231ddd2-a973-43ee-bfa0-3ab32515beb8-kube-api-access-7qmbs\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc 
kubenswrapper[4703]: I1209 12:27:21.245814 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m92d5\" (UniqueName: \"kubernetes.io/projected/bf475bf7-37dd-4920-888c-b53e8065591e-kube-api-access-m92d5\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.247263 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a9ded47-bcfd-4613-9fec-80c12e9a64b0" (UID: "8a9ded47-bcfd-4613-9fec-80c12e9a64b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.247387 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c0dec58-a2d5-44ef-8a01-2d7b555aed11" (UID: "5c0dec58-a2d5-44ef-8a01-2d7b555aed11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.251926 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj" (OuterVolumeSpecName: "kube-api-access-snvkj") pod "8a9ded47-bcfd-4613-9fec-80c12e9a64b0" (UID: "8a9ded47-bcfd-4613-9fec-80c12e9a64b0"). InnerVolumeSpecName "kube-api-access-snvkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.252029 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9640f126-3eb2-4402-a141-f3bb22b15f40" (UID: "9640f126-3eb2-4402-a141-f3bb22b15f40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.252813 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72429324-7660-4f13-8658-ede06684d423" (UID: "72429324-7660-4f13-8658-ede06684d423"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.253744 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl" (OuterVolumeSpecName: "kube-api-access-zc5gl") pod "9640f126-3eb2-4402-a141-f3bb22b15f40" (UID: "9640f126-3eb2-4402-a141-f3bb22b15f40"). InnerVolumeSpecName "kube-api-access-zc5gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.254796 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw" (OuterVolumeSpecName: "kube-api-access-mrggw") pod "72429324-7660-4f13-8658-ede06684d423" (UID: "72429324-7660-4f13-8658-ede06684d423"). InnerVolumeSpecName "kube-api-access-mrggw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.285091 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf" (OuterVolumeSpecName: "kube-api-access-9xgqf") pod "5c0dec58-a2d5-44ef-8a01-2d7b555aed11" (UID: "5c0dec58-a2d5-44ef-8a01-2d7b555aed11"). InnerVolumeSpecName "kube-api-access-9xgqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.294805 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7693-account-create-update-2gdqt" event={"ID":"9640f126-3eb2-4402-a141-f3bb22b15f40","Type":"ContainerDied","Data":"ea0e7a8b640b1bef076002873018f918cbcc816ff878096f3519d22918676dda"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.295250 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0e7a8b640b1bef076002873018f918cbcc816ff878096f3519d22918676dda" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.295361 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7693-account-create-update-2gdqt" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.309626 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"2122b13033142b5eba250289e5c638e93dbbf670cecc5250a92383b1e01ff260"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.316334 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.316454 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-ed5b-account-create-update-2nct2" event={"ID":"8a9ded47-bcfd-4613-9fec-80c12e9a64b0","Type":"ContainerDied","Data":"ed9c554c8a1c0f2664cecba17cdce136e56b59fa050e83089913a6eeeabbacff"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.316497 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9c554c8a1c0f2664cecba17cdce136e56b59fa050e83089913a6eeeabbacff" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.319650 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b7ce-account-create-update-c4zgt" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.319625 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b7ce-account-create-update-c4zgt" event={"ID":"5c0dec58-a2d5-44ef-8a01-2d7b555aed11","Type":"ContainerDied","Data":"15564143959d11f5aaedd44501d037adb9f075132e1bb8cb91562451652ea4d6"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.319775 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15564143959d11f5aaedd44501d037adb9f075132e1bb8cb91562451652ea4d6" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.326256 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rnfn" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.326854 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rnfn" event={"ID":"90165483-6678-444e-bda2-249f59813ba9","Type":"ContainerDied","Data":"34b45f38fd4d1e1e83628b3a5387533a77a5aa9a8265f0b32dea31334e7c81a1"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.327142 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34b45f38fd4d1e1e83628b3a5387533a77a5aa9a8265f0b32dea31334e7c81a1" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.330330 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-bfgfz" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.331054 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-bfgfz" event={"ID":"72429324-7660-4f13-8658-ede06684d423","Type":"ContainerDied","Data":"b6160557e8a257a5b09019da1693875c6ef6aaa5f78491ea3ff92b8e8647cdcc"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.331088 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6160557e8a257a5b09019da1693875c6ef6aaa5f78491ea3ff92b8e8647cdcc" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.335486 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4wl7p" event={"ID":"bf475bf7-37dd-4920-888c-b53e8065591e","Type":"ContainerDied","Data":"c5a7ec5501c7e3387ab6338b56b039be4337cda33dd048a1a5394a38dd61a458"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.335514 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a7ec5501c7e3387ab6338b56b039be4337cda33dd048a1a5394a38dd61a458" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.335628 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4wl7p" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.337232 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0f3e-account-create-update-2cnnf" event={"ID":"4231ddd2-a973-43ee-bfa0-3ab32515beb8","Type":"ContainerDied","Data":"d343c5b443d17ede4887b15ea4eae5492f48bcff5b8890f396f256b2a7473c9f"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.337279 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d343c5b443d17ede4887b15ea4eae5492f48bcff5b8890f396f256b2a7473c9f" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.337355 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0f3e-account-create-update-2cnnf" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.343469 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"30ba4b6c-b025-4ab0-b589-a2c72caf1997","Type":"ContainerStarted","Data":"cb7a812651cdd085d741c3329eb8e66e127990e50ae01607b8db3591050aadc3"} Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348572 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xgqf\" (UniqueName: \"kubernetes.io/projected/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-kube-api-access-9xgqf\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348623 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrggw\" (UniqueName: \"kubernetes.io/projected/72429324-7660-4f13-8658-ede06684d423-kube-api-access-mrggw\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348638 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72429324-7660-4f13-8658-ede06684d423-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348657 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348670 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snvkj\" (UniqueName: \"kubernetes.io/projected/8a9ded47-bcfd-4613-9fec-80c12e9a64b0-kube-api-access-snvkj\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348682 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9640f126-3eb2-4402-a141-f3bb22b15f40-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348697 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c0dec58-a2d5-44ef-8a01-2d7b555aed11-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.348710 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc5gl\" (UniqueName: \"kubernetes.io/projected/9640f126-3eb2-4402-a141-f3bb22b15f40-kube-api-access-zc5gl\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:21 crc kubenswrapper[4703]: I1209 12:27:21.373174 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.373149215 podStartE2EDuration="27.373149215s" podCreationTimestamp="2025-12-09 12:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:21.368588955 +0000 UTC m=+1340.617352484" watchObservedRunningTime="2025-12-09 12:27:21.373149215 +0000 UTC m=+1340.621912734" Dec 09 12:27:22 crc kubenswrapper[4703]: I1209 12:27:22.532542 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4fqbv" event={"ID":"d5253b89-3c89-45a9-8806-dfabf281ebbb","Type":"ContainerStarted","Data":"a26689d288ea613e0334ef9a66f35a89866a4f4a77d1792b39715b0d18789404"} Dec 09 12:27:22 crc kubenswrapper[4703]: I1209 12:27:22.570703 
4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4fqbv" podStartSLOduration=10.375866902 podStartE2EDuration="19.570676945s" podCreationTimestamp="2025-12-09 12:27:03 +0000 UTC" firstStartedPulling="2025-12-09 12:27:11.960680441 +0000 UTC m=+1331.209443960" lastFinishedPulling="2025-12-09 12:27:21.155490484 +0000 UTC m=+1340.404254003" observedRunningTime="2025-12-09 12:27:22.563851816 +0000 UTC m=+1341.812615345" watchObservedRunningTime="2025-12-09 12:27:22.570676945 +0000 UTC m=+1341.819440464" Dec 09 12:27:23 crc kubenswrapper[4703]: I1209 12:27:23.547611 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"73b89c8ac13275e791c9e39956f2c2a3ad2a908c687e81637c3d4ed1b47dfa77"} Dec 09 12:27:23 crc kubenswrapper[4703]: I1209 12:27:23.548036 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"e8248f46967e0d80764205679811e6083fe76bddbe7b817466d3639ec83c553a"} Dec 09 12:27:23 crc kubenswrapper[4703]: I1209 12:27:23.548054 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"af1a133907aeb669fef943bbe05155f76e5768c9a192e6c85ab4baa307cd6570"} Dec 09 12:27:24 crc kubenswrapper[4703]: I1209 12:27:24.561426 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"a10f597590bacf9caf0a678af13d69e1a4d7b849c89b53f6630493e547089f50"} Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.572694 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.574182 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.580241 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"a3abf1023dbdb922136994ff447976a653b1ccca600546a6932100ff3d954fda"} Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.580301 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"3c2d61423ac8f5105d7392fab7913512211c6d05e1b06128df2e2251c7fc127a"} Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.580313 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"45c3c7b91197c1506ad4200aa037d1a2b59d48148d7d722114a35056553cddcd"} Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.582938 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.584349 4703 generic.go:334] "Generic (PLEG): container finished" podID="d5253b89-3c89-45a9-8806-dfabf281ebbb" containerID="a26689d288ea613e0334ef9a66f35a89866a4f4a77d1792b39715b0d18789404" exitCode=0 Dec 09 12:27:25 crc kubenswrapper[4703]: I1209 12:27:25.584406 4703 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4fqbv" event={"ID":"d5253b89-3c89-45a9-8806-dfabf281ebbb","Type":"ContainerDied","Data":"a26689d288ea613e0334ef9a66f35a89866a4f4a77d1792b39715b0d18789404"} Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.608963 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"cd99f4654e840b4bccbb87663c67a277cadce7bdad9183957f7741ce02910303"} Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.609570 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"9a92d3ccabfef70f37203b441b57b657164623a25e0840483526a6d699b5ce90"} Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.610166 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"0d2000c52e3ae684839565838d2a320f76e045e8ab7df333f8d36b68c5e65793"} Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.610205 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44","Type":"ContainerStarted","Data":"4e62037b3f869b8f7c5b48ddd8f5bc75aaa0f4153925b650da9192394cd8405f"} Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.616436 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:26 crc kubenswrapper[4703]: I1209 12:27:26.655074 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.493870002 podStartE2EDuration="59.655047184s" podCreationTimestamp="2025-12-09 12:26:27 +0000 UTC" firstStartedPulling="2025-12-09 12:27:01.63243786 +0000 UTC m=+1320.881201379" lastFinishedPulling="2025-12-09 12:27:24.793615042 +0000 UTC m=+1344.042378561" observedRunningTime="2025-12-09 12:27:26.649654523 +0000 UTC m=+1345.898418042" watchObservedRunningTime="2025-12-09 12:27:26.655047184 +0000 UTC m=+1345.903810703" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.125054 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126004 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126788 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0dec58-a2d5-44ef-8a01-2d7b555aed11" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126811 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0dec58-a2d5-44ef-8a01-2d7b555aed11" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126821 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4231ddd2-a973-43ee-bfa0-3ab32515beb8" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126830 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4231ddd2-a973-43ee-bfa0-3ab32515beb8" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126852 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9640f126-3eb2-4402-a141-f3bb22b15f40" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126858 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="9640f126-3eb2-4402-a141-f3bb22b15f40" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126872 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90165483-6678-444e-bda2-249f59813ba9" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126878 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="90165483-6678-444e-bda2-249f59813ba9" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126890 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5253b89-3c89-45a9-8806-dfabf281ebbb" containerName="keystone-db-sync" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126898 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5253b89-3c89-45a9-8806-dfabf281ebbb" containerName="keystone-db-sync" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126916 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf475bf7-37dd-4920-888c-b53e8065591e" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126922 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf475bf7-37dd-4920-888c-b53e8065591e" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126953 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9ded47-bcfd-4613-9fec-80c12e9a64b0" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126963 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9ded47-bcfd-4613-9fec-80c12e9a64b0" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.126971 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7648d9a-d5c5-4854-ad02-ddf686748d6a" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.126977 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7648d9a-d5c5-4854-ad02-ddf686748d6a" 
containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: E1209 12:27:27.127004 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72429324-7660-4f13-8658-ede06684d423" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127022 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="72429324-7660-4f13-8658-ede06684d423" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127475 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9ded47-bcfd-4613-9fec-80c12e9a64b0" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127524 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4231ddd2-a973-43ee-bfa0-3ab32515beb8" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127539 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="9640f126-3eb2-4402-a141-f3bb22b15f40" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127553 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7648d9a-d5c5-4854-ad02-ddf686748d6a" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127573 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="72429324-7660-4f13-8658-ede06684d423" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127594 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf475bf7-37dd-4920-888c-b53e8065591e" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127606 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0dec58-a2d5-44ef-8a01-2d7b555aed11" containerName="mariadb-account-create-update" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127626 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5253b89-3c89-45a9-8806-dfabf281ebbb" containerName="keystone-db-sync" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.127640 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="90165483-6678-444e-bda2-249f59813ba9" containerName="mariadb-database-create" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.137309 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.137462 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.144552 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.197301 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data\") pod \"d5253b89-3c89-45a9-8806-dfabf281ebbb\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.197523 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle\") pod \"d5253b89-3c89-45a9-8806-dfabf281ebbb\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.197621 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpvpm\" (UniqueName: \"kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm\") pod \"d5253b89-3c89-45a9-8806-dfabf281ebbb\" (UID: \"d5253b89-3c89-45a9-8806-dfabf281ebbb\") " Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198137 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198206 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198226 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198269 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198405 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnws5\" (UniqueName: \"kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.198463 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.207572 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm" (OuterVolumeSpecName: "kube-api-access-jpvpm") pod "d5253b89-3c89-45a9-8806-dfabf281ebbb" (UID: "d5253b89-3c89-45a9-8806-dfabf281ebbb"). InnerVolumeSpecName "kube-api-access-jpvpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.248580 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5253b89-3c89-45a9-8806-dfabf281ebbb" (UID: "d5253b89-3c89-45a9-8806-dfabf281ebbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.256536 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data" (OuterVolumeSpecName: "config-data") pod "d5253b89-3c89-45a9-8806-dfabf281ebbb" (UID: "d5253b89-3c89-45a9-8806-dfabf281ebbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302078 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302128 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302181 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302376 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnws5\" (UniqueName: \"kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302425 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: 
I1209 12:27:27.302456 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302523 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpvpm\" (UniqueName: \"kubernetes.io/projected/d5253b89-3c89-45a9-8806-dfabf281ebbb-kube-api-access-jpvpm\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302538 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.302547 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5253b89-3c89-45a9-8806-dfabf281ebbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.303234 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.303683 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.303896 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.304563 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.304987 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.323882 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnws5\" (UniqueName: \"kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5\") pod \"dnsmasq-dns-764c5664d7-x57zj\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.468401 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.619594 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4fqbv" event={"ID":"d5253b89-3c89-45a9-8806-dfabf281ebbb","Type":"ContainerDied","Data":"f78ae3f0a9d2a34e6e80a4f520b632a45d9ed52b0ffed8a5ad3cfdcc52ac4564"} Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.619656 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78ae3f0a9d2a34e6e80a4f520b632a45d9ed52b0ffed8a5ad3cfdcc52ac4564" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.619723 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4fqbv" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.622372 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-29stw" event={"ID":"e4e5cccd-9c7c-467b-a10b-4a989ea688e3","Type":"ContainerStarted","Data":"e8d2763d1b2c74ab50dd51c74ca6a9e741daa257fa609a0dac2037996b9d20f0"} Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.676876 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-29stw" podStartSLOduration=2.778174216 podStartE2EDuration="36.676840706s" podCreationTimestamp="2025-12-09 12:26:51 +0000 UTC" firstStartedPulling="2025-12-09 12:26:52.612423451 +0000 UTC m=+1311.861186970" lastFinishedPulling="2025-12-09 12:27:26.511089941 +0000 UTC m=+1345.759853460" observedRunningTime="2025-12-09 12:27:27.648519051 +0000 UTC m=+1346.897282560" watchObservedRunningTime="2025-12-09 12:27:27.676840706 +0000 UTC m=+1346.925604235" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.947860 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.965878 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lqvgg"] Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.967354 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.974148 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.974506 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.974667 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s28wx" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.979362 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.979571 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 12:27:27 crc kubenswrapper[4703]: I1209 12:27:27.991696 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqvgg"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027598 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027689 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4kk\" (UniqueName: \"kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027759 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027793 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027852 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.027879 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.028056 4703 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.030446 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.124079 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.131831 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.131943 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.131994 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4kk\" (UniqueName: \"kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132033 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132079 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132130 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132172 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132238 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132262 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132284 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132306 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.132347 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngc42\" (UniqueName: \"kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.156036 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.160948 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.167828 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.168224 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.169340 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.179775 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.187164 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4kk\" (UniqueName: \"kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk\") pod \"keystone-bootstrap-lqvgg\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238061 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238118 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238178 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngc42\" (UniqueName: \"kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238231 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238305 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.238360 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.239645 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.240040 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc 
kubenswrapper[4703]: I1209 12:27:28.240355 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.240751 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.241007 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.277320 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-55f4w"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.277787 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngc42\" (UniqueName: \"kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42\") pod \"dnsmasq-dns-5959f8865f-j7cmw\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.292950 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.304523 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.304794 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.304610 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jnktl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.317107 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.335335 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5tddl"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.337580 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.352208 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.352260 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.352353 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rqn\" (UniqueName: \"kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.354134 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.354934 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.363990 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5ckqr" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.364926 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.417769 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55f4w"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.449885 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5tddl"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462125 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462491 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rqn\" (UniqueName: \"kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462572 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk6px\" (UniqueName: \"kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462675 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462836 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462893 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.462925 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.463028 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.463066 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.481274 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.481942 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.485268 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-q85xp"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.486835 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.500996 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.509774 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vbdwz" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.527960 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rqn\" (UniqueName: \"kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn\") pod \"neutron-db-sync-55f4w\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.539245 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-jhwbv"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.540686 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.546972 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.562739 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.563048 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.563296 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-smxct" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565471 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbp2f\" (UniqueName: \"kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565556 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565586 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565604 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565624 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565669 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565767 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565810 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fk6px\" (UniqueName: \"kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565866 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.565953 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.605921 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.606109 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.611273 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.620179 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.633915 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk6px\" (UniqueName: \"kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px\") pod \"cinder-db-sync-5tddl\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.676884 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.678106 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55f4w" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.682712 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684389 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684473 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684504 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbp2f\" (UniqueName: \"kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684638 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684703 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xljc\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684890 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684920 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.684946 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.712024 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q85xp"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.720452 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.720633 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.721464 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.724435 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5tddl" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.763286 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" event={"ID":"58ebb093-696a-4703-8e50-0b19695f15d5","Type":"ContainerStarted","Data":"ef9de1c4d6fd102f783c58c6cf0b0877b84fab1d08024b599ab684cc4b97c87c"} Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.799592 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.805792 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.805923 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.805984 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806132 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkpxz\" (UniqueName: \"kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806284 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xljc\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806396 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806424 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806477 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806560 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806622 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806641 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.806675 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.807867 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbp2f\" (UniqueName: \"kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f\") pod \"barbican-db-sync-q85xp\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.845972 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.856295 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc 
kubenswrapper[4703]: I1209 12:27:28.862863 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.864643 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.865423 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xljc\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc\") pod \"cloudkitty-db-sync-jhwbv\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.873357 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-q85xp" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.875599 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jhwbv"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.914392 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.915397 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.916472 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.916601 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkpxz\" (UniqueName: \"kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.916762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.920055 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.928606 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.929472 
4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.929661 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.929792 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.940807 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.957407 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.958498 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.969834 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.974777 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.985321 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:28 crc kubenswrapper[4703]: I1209 12:27:28.993565 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkpxz\" (UniqueName: \"kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz\") pod \"ceilometer-0\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " pod="openstack/ceilometer-0" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.160905 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.187161 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.196639 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lq8j2"] Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.197745 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.198112 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.200460 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.201662 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lq8j2"] Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.220962 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.222499 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.222563 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wjzsp" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280386 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280484 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280519 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280553 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280681 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 
12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280748 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280783 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280820 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280843 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skrx9\" (UniqueName: \"kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.280975 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.281036 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfj6\" (UniqueName: \"kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383066 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383172 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfj6\" (UniqueName: \"kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383286 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc 
kubenswrapper[4703]: I1209 12:27:29.383316 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383338 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383368 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383424 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383461 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383487 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383511 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383537 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skrx9\" (UniqueName: \"kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.383798 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.408494 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.409605 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.410577 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.413501 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gfj6\" (UniqueName: \"kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6\") pod \"placement-db-sync-lq8j2\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.423354 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.423608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.424028 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.424546 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.425149 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skrx9\" (UniqueName: \"kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.439998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-wb7mt\" (UID: 
\"9f9f4ead-ef04-4275-82de-c42a903d8252\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.509861 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lq8j2" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.713558 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.785344 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" event={"ID":"58ebb093-696a-4703-8e50-0b19695f15d5","Type":"ContainerStarted","Data":"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e"} Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.785595 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" podUID="58ebb093-696a-4703-8e50-0b19695f15d5" containerName="init" containerID="cri-o://cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e" gracePeriod=10 Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.800943 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:29 crc kubenswrapper[4703]: I1209 12:27:29.918272 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lqvgg"] Dec 09 12:27:29 crc kubenswrapper[4703]: W1209 12:27:29.947621 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a09cf1_99ce_4245_9e5c_260b02aeff14.slice/crio-cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3 WatchSource:0}: Error finding container cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3: Status 404 returned error can't find the container with id cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3 Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.085370 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.085787 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.085841 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.086750 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.086816 4703 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101" gracePeriod=600 Dec 09 12:27:30 crc kubenswrapper[4703]: W1209 12:27:30.266665 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d3e75ac_9860_42b3_b442_f9bd60c82e58.slice/crio-71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10 WatchSource:0}: Error finding container 71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10: Status 404 returned error can't find the container with id 71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10 Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.269468 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-55f4w"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.356527 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-q85xp"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.384910 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5tddl"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.414328 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-jhwbv"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.478363 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.566125 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lq8j2"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.614592 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.632500 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:27:30 crc kubenswrapper[4703]: W1209 12:27:30.649893 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f9f4ead_ef04_4275_82de_c42a903d8252.slice/crio-907f0b474d95fa78d278a4c2ace2a3b6c5214bada0d4b009e351b6c05d3d675b WatchSource:0}: Error finding container 907f0b474d95fa78d278a4c2ace2a3b6c5214bada0d4b009e351b6c05d3d675b: Status 404 returned error can't find the container with id 907f0b474d95fa78d278a4c2ace2a3b6c5214bada0d4b009e351b6c05d3d675b Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.674605 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.674814 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.674875 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.675065 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnws5\" (UniqueName: \"kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.675359 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.675424 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config\") pod \"58ebb093-696a-4703-8e50-0b19695f15d5\" (UID: \"58ebb093-696a-4703-8e50-0b19695f15d5\") " Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.708113 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5" (OuterVolumeSpecName: "kube-api-access-nnws5") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "kube-api-access-nnws5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.783353 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.783600 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.785742 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnws5\" (UniqueName: \"kubernetes.io/projected/58ebb093-696a-4703-8e50-0b19695f15d5-kube-api-access-nnws5\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.785845 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.788338 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.814096 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jhwbv" event={"ID":"8dc2a644-ad99-4857-9913-562b6ed7371f","Type":"ContainerStarted","Data":"e87f3c6a38f8c6c012e119ea1317570617efdd832050f77c61b166d71cf247a1"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.818235 4703 generic.go:334] "Generic (PLEG): container finished" podID="58ebb093-696a-4703-8e50-0b19695f15d5" containerID="cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e" exitCode=0 Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.818335 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" event={"ID":"58ebb093-696a-4703-8e50-0b19695f15d5","Type":"ContainerDied","Data":"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.818371 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" event={"ID":"58ebb093-696a-4703-8e50-0b19695f15d5","Type":"ContainerDied","Data":"ef9de1c4d6fd102f783c58c6cf0b0877b84fab1d08024b599ab684cc4b97c87c"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.818403 4703 scope.go:117] "RemoveContainer" containerID="cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.818604 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x57zj" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.819807 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config" (OuterVolumeSpecName: "config") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.844551 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q85xp" event={"ID":"75741f58-7780-4daa-a61b-6985e8baeb5b","Type":"ContainerStarted","Data":"b540b48761bf247b0841e5ee3e9b7e2711d8a084cc6aaa764f520a2a57f405ef"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.857866 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.864521 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "58ebb093-696a-4703-8e50-0b19695f15d5" (UID: "58ebb093-696a-4703-8e50-0b19695f15d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.891646 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.891690 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.891706 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/58ebb093-696a-4703-8e50-0b19695f15d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.893798 4703 scope.go:117] "RemoveContainer" containerID="cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e" Dec 09 12:27:30 crc kubenswrapper[4703]: E1209 12:27:30.894814 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e\": container with ID starting with cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e not found: ID does not exist" containerID="cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.894870 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e"} err="failed to get container status \"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e\": rpc error: code = NotFound desc = could not find container \"cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e\": container with ID starting with cbdebe648b0ec9fc75a5fe93980de1d5af4c6f0b230f595bd44daf2a098f625e not found: ID does not exist" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.899369 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101" exitCode=0 Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.899639 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.899711 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.899866 4703 scope.go:117] 
"RemoveContainer" containerID="852cd8ebe9b36e4877ac2f4fe135ba61b72af0fc110102ec40d7b7e1b7e0423f" Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.909116 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5tddl" event={"ID":"24ebaba5-65a6-4be5-8112-10c77a6d986c","Type":"ContainerStarted","Data":"64d9f24ae92cac0e4d79a040aca44254783ee6a024bb928510929bfc0804040a"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.911994 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqvgg" event={"ID":"13a09cf1-99ce-4245-9e5c-260b02aeff14","Type":"ContainerStarted","Data":"53488f4d64c4a6e7e47e02f044e62c2880ce5a5c8f207c4e49d7f71fd0f1ba5b"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.912050 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqvgg" event={"ID":"13a09cf1-99ce-4245-9e5c-260b02aeff14","Type":"ContainerStarted","Data":"cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.914605 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerStarted","Data":"9181fd1c18fec456c579317f8049bec85fe27fd3bbc835b729fa2c0c5a2558fe"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.917346 4703 generic.go:334] "Generic (PLEG): container finished" podID="f1d66be2-3a97-487f-99d4-59985c569689" containerID="24bbe5a8129e4661258dcf30613023d5da7214c2acf2d82e80cc653a99bb7557" exitCode=0 Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.917404 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" event={"ID":"f1d66be2-3a97-487f-99d4-59985c569689","Type":"ContainerDied","Data":"24bbe5a8129e4661258dcf30613023d5da7214c2acf2d82e80cc653a99bb7557"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.917422 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" event={"ID":"f1d66be2-3a97-487f-99d4-59985c569689","Type":"ContainerStarted","Data":"ef5166bc72911f22852ffd18110fd243138bd5de7ed8d419287e43fda756cff4"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.928166 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55f4w" event={"ID":"0d3e75ac-9860-42b3-b442-f9bd60c82e58","Type":"ContainerStarted","Data":"71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.935571 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lq8j2" event={"ID":"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7","Type":"ContainerStarted","Data":"ba09cb2d539742949e9814c5fafa805e5c3b35c1ff55013f0ab8427ee47f04db"} Dec 09 12:27:30 crc kubenswrapper[4703]: I1209 12:27:30.937338 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" event={"ID":"9f9f4ead-ef04-4275-82de-c42a903d8252","Type":"ContainerStarted","Data":"907f0b474d95fa78d278a4c2ace2a3b6c5214bada0d4b009e351b6c05d3d675b"} Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.052238 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lqvgg" podStartSLOduration=4.052214668 podStartE2EDuration="4.052214668s" podCreationTimestamp="2025-12-09 12:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 12:27:30.942061682 +0000 UTC m=+1350.190825201" watchObservedRunningTime="2025-12-09 12:27:31.052214668 +0000 UTC m=+1350.300978187" Dec 09 12:27:31 crc kubenswrapper[4703]: E1209 12:27:31.397673 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58ebb093_696a_4703_8e50_0b19695f15d5.slice/crio-ef9de1c4d6fd102f783c58c6cf0b0877b84fab1d08024b599ab684cc4b97c87c\": RecentStats: unable to find data in memory cache]" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.454073 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.505950 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x57zj"] Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.679431 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.818774 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngc42\" (UniqueName: \"kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.818900 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.819178 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.819227 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.819309 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.819393 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb\") pod \"f1d66be2-3a97-487f-99d4-59985c569689\" (UID: \"f1d66be2-3a97-487f-99d4-59985c569689\") " Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.836726 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42" (OuterVolumeSpecName: "kube-api-access-ngc42") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: 
"f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "kube-api-access-ngc42". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.885346 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: "f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.903178 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: "f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.908272 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: "f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.920457 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: "f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.928346 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngc42\" (UniqueName: \"kubernetes.io/projected/f1d66be2-3a97-487f-99d4-59985c569689-kube-api-access-ngc42\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.928382 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.928397 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.928409 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.928425 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:31 crc kubenswrapper[4703]: I1209 12:27:31.939559 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config" (OuterVolumeSpecName: "config") pod "f1d66be2-3a97-487f-99d4-59985c569689" (UID: "f1d66be2-3a97-487f-99d4-59985c569689"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.002681 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" event={"ID":"f1d66be2-3a97-487f-99d4-59985c569689","Type":"ContainerDied","Data":"ef5166bc72911f22852ffd18110fd243138bd5de7ed8d419287e43fda756cff4"} Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.002737 4703 scope.go:117] "RemoveContainer" containerID="24bbe5a8129e4661258dcf30613023d5da7214c2acf2d82e80cc653a99bb7557" Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.002691 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-j7cmw" Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.010369 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55f4w" event={"ID":"0d3e75ac-9860-42b3-b442-f9bd60c82e58","Type":"ContainerStarted","Data":"597b350cad8d8aacc535de81806eabf41e75c87bf67770c72b69ed15a3617f2c"} Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.031582 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d66be2-3a97-487f-99d4-59985c569689-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.090064 4703 generic.go:334] "Generic (PLEG): container finished" podID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerID="11c07a92b483ecb8c3533a083dbd1142e196c3e185413489b5d95865823cbae0" exitCode=0 Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.092229 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" event={"ID":"9f9f4ead-ef04-4275-82de-c42a903d8252","Type":"ContainerDied","Data":"11c07a92b483ecb8c3533a083dbd1142e196c3e185413489b5d95865823cbae0"} Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.110341 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.242588 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-55f4w" podStartSLOduration=4.24256845 podStartE2EDuration="4.24256845s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:32.034882301 +0000 UTC m=+1351.283645830" watchObservedRunningTime="2025-12-09 12:27:32.24256845 +0000 UTC m=+1351.491331969" Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.286932 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:32 crc kubenswrapper[4703]: I1209 12:27:32.308447 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-j7cmw"] Dec 09 12:27:33 crc kubenswrapper[4703]: I1209 12:27:33.104289 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ebb093-696a-4703-8e50-0b19695f15d5" path="/var/lib/kubelet/pods/58ebb093-696a-4703-8e50-0b19695f15d5/volumes" Dec 09 12:27:33 crc kubenswrapper[4703]: I1209 12:27:33.112673 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1d66be2-3a97-487f-99d4-59985c569689" path="/var/lib/kubelet/pods/f1d66be2-3a97-487f-99d4-59985c569689/volumes" Dec 09 12:27:33 crc kubenswrapper[4703]: I1209 12:27:33.113499 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" event={"ID":"9f9f4ead-ef04-4275-82de-c42a903d8252","Type":"ContainerStarted","Data":"bf916b9103a92ddce3d76c94f3ac4c7a6f28d494ba8a425857946f5961fa78d9"} Dec 09 12:27:33 crc kubenswrapper[4703]: I1209 12:27:33.113964 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:33 crc kubenswrapper[4703]: I1209 12:27:33.141225 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" podStartSLOduration=5.141203964 podStartE2EDuration="5.141203964s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:33.130844052 +0000 UTC m=+1352.379607571" watchObservedRunningTime="2025-12-09 12:27:33.141203964 +0000 UTC m=+1352.389967483" Dec 09 12:27:38 crc kubenswrapper[4703]: I1209 12:27:38.203096 4703 generic.go:334] "Generic (PLEG): container finished" podID="13a09cf1-99ce-4245-9e5c-260b02aeff14" containerID="53488f4d64c4a6e7e47e02f044e62c2880ce5a5c8f207c4e49d7f71fd0f1ba5b" exitCode=0 Dec 09 12:27:38 crc kubenswrapper[4703]: I1209 12:27:38.203387 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqvgg" event={"ID":"13a09cf1-99ce-4245-9e5c-260b02aeff14","Type":"ContainerDied","Data":"53488f4d64c4a6e7e47e02f044e62c2880ce5a5c8f207c4e49d7f71fd0f1ba5b"} Dec 09 12:27:39 crc kubenswrapper[4703]: I1209 12:27:39.715785 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:27:39 crc kubenswrapper[4703]: I1209 12:27:39.781042 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"] Dec 09 12:27:39 crc kubenswrapper[4703]: I1209 12:27:39.784408 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-78wn9" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="dnsmasq-dns" containerID="cri-o://914e48a0dde8274b9f23ac340400c01aa3da20933e4be792329ffbd3f744f87d" gracePeriod=10 Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.246725 4703 generic.go:334] "Generic (PLEG): container finished" podID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerID="914e48a0dde8274b9f23ac340400c01aa3da20933e4be792329ffbd3f744f87d" exitCode=0 Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.247569 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerDied","Data":"914e48a0dde8274b9f23ac340400c01aa3da20933e4be792329ffbd3f744f87d"} Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.438975 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.623647 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.623782 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.623897 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr4kk\" (UniqueName: \"kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.623923 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.624842 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.624878 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts\") pod \"13a09cf1-99ce-4245-9e5c-260b02aeff14\" (UID: \"13a09cf1-99ce-4245-9e5c-260b02aeff14\") " Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.640568 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts" (OuterVolumeSpecName: "scripts") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.645247 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.653937 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk" (OuterVolumeSpecName: "kube-api-access-vr4kk") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "kube-api-access-vr4kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.663488 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data" (OuterVolumeSpecName: "config-data") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.680105 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.729642 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13a09cf1-99ce-4245-9e5c-260b02aeff14" (UID: "13a09cf1-99ce-4245-9e5c-260b02aeff14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733125 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733169 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733202 4703 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733215 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733228 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr4kk\" (UniqueName: \"kubernetes.io/projected/13a09cf1-99ce-4245-9e5c-260b02aeff14-kube-api-access-vr4kk\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:40 crc kubenswrapper[4703]: I1209 12:27:40.733240 4703 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13a09cf1-99ce-4245-9e5c-260b02aeff14-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.259890 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lqvgg" event={"ID":"13a09cf1-99ce-4245-9e5c-260b02aeff14","Type":"ContainerDied","Data":"cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3"} Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.259940 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd6cbbcc35ab00fc8ef3dc44ede981e6a1a2e0c3207fa2d60a9cd7f837149a3" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.260003 4703 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lqvgg" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.537778 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lqvgg"] Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.554137 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lqvgg"] Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.640943 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6b8bj"] Dec 09 12:27:41 crc kubenswrapper[4703]: E1209 12:27:41.641477 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d66be2-3a97-487f-99d4-59985c569689" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641502 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d66be2-3a97-487f-99d4-59985c569689" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: E1209 12:27:41.641546 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ebb093-696a-4703-8e50-0b19695f15d5" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641564 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ebb093-696a-4703-8e50-0b19695f15d5" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: E1209 12:27:41.641582 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a09cf1-99ce-4245-9e5c-260b02aeff14" containerName="keystone-bootstrap" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641591 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a09cf1-99ce-4245-9e5c-260b02aeff14" containerName="keystone-bootstrap" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641812 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a09cf1-99ce-4245-9e5c-260b02aeff14" containerName="keystone-bootstrap" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641842 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d66be2-3a97-487f-99d4-59985c569689" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.641864 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ebb093-696a-4703-8e50-0b19695f15d5" containerName="init" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.642935 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.645690 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.645999 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.646267 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.646434 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s28wx" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.656588 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6b8bj"] Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.754768 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnphs\" (UniqueName: \"kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.754849 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.754905 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.754931 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.755144 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.755274 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.856941 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data\") pod \"keystone-bootstrap-6b8bj\" (UID: 
\"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.857018 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.857042 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.857147 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.857173 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.857230 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnphs\" (UniqueName: \"kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.863552 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.863932 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.864744 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.869083 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.878300 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.882492 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnphs\" (UniqueName: \"kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs\") pod \"keystone-bootstrap-6b8bj\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:41 crc kubenswrapper[4703]: I1209 12:27:41.981378 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:27:43 crc kubenswrapper[4703]: I1209 12:27:43.081005 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a09cf1-99ce-4245-9e5c-260b02aeff14" path="/var/lib/kubelet/pods/13a09cf1-99ce-4245-9e5c-260b02aeff14/volumes" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.245548 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-78wn9" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.341636 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-78wn9" event={"ID":"cb943c46-2f32-43db-8192-d20d6e8059ea","Type":"ContainerDied","Data":"d43b82055ae174b2b2abbc0292d577465db4e2c5c5d1abcb41752c2f152105e9"} Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.341730 4703 scope.go:117] "RemoveContainer" containerID="914e48a0dde8274b9f23ac340400c01aa3da20933e4be792329ffbd3f744f87d" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.341993 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-78wn9" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.419440 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb\") pod \"cb943c46-2f32-43db-8192-d20d6e8059ea\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.419562 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config\") pod \"cb943c46-2f32-43db-8192-d20d6e8059ea\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.419692 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvhn\" (UniqueName: \"kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn\") pod \"cb943c46-2f32-43db-8192-d20d6e8059ea\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.419762 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc\") pod \"cb943c46-2f32-43db-8192-d20d6e8059ea\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.419854 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb\") pod \"cb943c46-2f32-43db-8192-d20d6e8059ea\" (UID: \"cb943c46-2f32-43db-8192-d20d6e8059ea\") " Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.493423 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn" (OuterVolumeSpecName: "kube-api-access-lmvhn") pod "cb943c46-2f32-43db-8192-d20d6e8059ea" (UID: "cb943c46-2f32-43db-8192-d20d6e8059ea"). InnerVolumeSpecName "kube-api-access-lmvhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.524817 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvhn\" (UniqueName: \"kubernetes.io/projected/cb943c46-2f32-43db-8192-d20d6e8059ea-kube-api-access-lmvhn\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.563467 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb943c46-2f32-43db-8192-d20d6e8059ea" (UID: "cb943c46-2f32-43db-8192-d20d6e8059ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.599052 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb943c46-2f32-43db-8192-d20d6e8059ea" (UID: "cb943c46-2f32-43db-8192-d20d6e8059ea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.612958 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb943c46-2f32-43db-8192-d20d6e8059ea" (UID: "cb943c46-2f32-43db-8192-d20d6e8059ea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.616236 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config" (OuterVolumeSpecName: "config") pod "cb943c46-2f32-43db-8192-d20d6e8059ea" (UID: "cb943c46-2f32-43db-8192-d20d6e8059ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.626578 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.626875 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.626965 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.627055 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb943c46-2f32-43db-8192-d20d6e8059ea-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.692796 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"] Dec 09 12:27:44 crc kubenswrapper[4703]: I1209 12:27:44.702274 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-78wn9"] Dec 09 12:27:45 crc kubenswrapper[4703]: I1209 12:27:45.083898 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" path="/var/lib/kubelet/pods/cb943c46-2f32-43db-8192-d20d6e8059ea/volumes" Dec 09 12:27:47 crc kubenswrapper[4703]: I1209 12:27:47.862134 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-78wn9" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Dec 09 12:27:53 crc kubenswrapper[4703]: I1209 12:27:53.457401 4703 generic.go:334] "Generic (PLEG): container finished" podID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" containerID="e8d2763d1b2c74ab50dd51c74ca6a9e741daa257fa609a0dac2037996b9d20f0" exitCode=0 Dec 09 12:27:53 crc kubenswrapper[4703]: I1209 12:27:53.457488 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-29stw" event={"ID":"e4e5cccd-9c7c-467b-a10b-4a989ea688e3","Type":"ContainerDied","Data":"e8d2763d1b2c74ab50dd51c74ca6a9e741daa257fa609a0dac2037996b9d20f0"} Dec 09 12:28:03 crc kubenswrapper[4703]: I1209 12:28:03.595085 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="0d3e75ac-9860-42b3-b442-f9bd60c82e58" containerID="597b350cad8d8aacc535de81806eabf41e75c87bf67770c72b69ed15a3617f2c" exitCode=0 Dec 09 12:28:03 crc kubenswrapper[4703]: I1209 12:28:03.595216 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55f4w" event={"ID":"0d3e75ac-9860-42b3-b442-f9bd60c82e58","Type":"ContainerDied","Data":"597b350cad8d8aacc535de81806eabf41e75c87bf67770c72b69ed15a3617f2c"} Dec 09 12:28:05 crc kubenswrapper[4703]: E1209 12:28:05.340971 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 09 12:28:05 crc kubenswrapper[4703]: E1209 12:28:05.341429 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75hfbh6fhdbh5bbh67dhc8h549h65ch668h647h589h67dh659h557h586hch654h67fh5b9h677h64bh688hc7h5f6h684h7ch7ch7fh9dhddh599q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkpxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8439e49e-7ee2-4be7-b4a2-f437a2124bd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:05 crc kubenswrapper[4703]: I1209 12:28:05.883889 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-29stw" Dec 09 12:28:05 crc kubenswrapper[4703]: I1209 12:28:05.911868 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55f4w" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.081432 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65rqn\" (UniqueName: \"kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn\") pod \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.081529 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data\") pod \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.081626 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config\") pod \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.081680 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data\") pod \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.082209 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle\") pod \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.082258 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle\") pod \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\" (UID: \"0d3e75ac-9860-42b3-b442-f9bd60c82e58\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.082640 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmls\" (UniqueName: \"kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls\") pod \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\" (UID: \"e4e5cccd-9c7c-467b-a10b-4a989ea688e3\") " Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.087608 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e4e5cccd-9c7c-467b-a10b-4a989ea688e3" (UID: "e4e5cccd-9c7c-467b-a10b-4a989ea688e3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.087640 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn" (OuterVolumeSpecName: "kube-api-access-65rqn") pod "0d3e75ac-9860-42b3-b442-f9bd60c82e58" (UID: "0d3e75ac-9860-42b3-b442-f9bd60c82e58"). 
InnerVolumeSpecName "kube-api-access-65rqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.087797 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls" (OuterVolumeSpecName: "kube-api-access-brmls") pod "e4e5cccd-9c7c-467b-a10b-4a989ea688e3" (UID: "e4e5cccd-9c7c-467b-a10b-4a989ea688e3"). InnerVolumeSpecName "kube-api-access-brmls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.112395 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e5cccd-9c7c-467b-a10b-4a989ea688e3" (UID: "e4e5cccd-9c7c-467b-a10b-4a989ea688e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.113383 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config" (OuterVolumeSpecName: "config") pod "0d3e75ac-9860-42b3-b442-f9bd60c82e58" (UID: "0d3e75ac-9860-42b3-b442-f9bd60c82e58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.119753 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d3e75ac-9860-42b3-b442-f9bd60c82e58" (UID: "0d3e75ac-9860-42b3-b442-f9bd60c82e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.162867 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data" (OuterVolumeSpecName: "config-data") pod "e4e5cccd-9c7c-467b-a10b-4a989ea688e3" (UID: "e4e5cccd-9c7c-467b-a10b-4a989ea688e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184553 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65rqn\" (UniqueName: \"kubernetes.io/projected/0d3e75ac-9860-42b3-b442-f9bd60c82e58-kube-api-access-65rqn\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184593 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184606 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184619 4703 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184630 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184641 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d3e75ac-9860-42b3-b442-f9bd60c82e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.184652 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmls\" (UniqueName: \"kubernetes.io/projected/e4e5cccd-9c7c-467b-a10b-4a989ea688e3-kube-api-access-brmls\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.637251 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-55f4w" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.637243 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-55f4w" event={"ID":"0d3e75ac-9860-42b3-b442-f9bd60c82e58","Type":"ContainerDied","Data":"71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10"} Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.637884 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f367303afd88791e0e7d4c7193ca221367c23dcb538a6304714896d8b8ca10" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.638927 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-29stw" event={"ID":"e4e5cccd-9c7c-467b-a10b-4a989ea688e3","Type":"ContainerDied","Data":"7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b"} Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.638958 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb255c9f79112ea0512986264efb2b60b76bb46ee0e3d503b6a9dc8c6fa5e3b" Dec 09 12:28:06 crc kubenswrapper[4703]: I1209 12:28:06.639009 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-29stw" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.215003 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:07 crc kubenswrapper[4703]: E1209 12:28:07.216231 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="dnsmasq-dns" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.216339 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="dnsmasq-dns" Dec 09 12:28:07 crc kubenswrapper[4703]: E1209 12:28:07.216701 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3e75ac-9860-42b3-b442-f9bd60c82e58" containerName="neutron-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.216759 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3e75ac-9860-42b3-b442-f9bd60c82e58" containerName="neutron-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: E1209 12:28:07.217151 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" containerName="glance-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.217272 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" containerName="glance-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: E1209 12:28:07.219315 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="init" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.219510 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="init" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.229362 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3e75ac-9860-42b3-b442-f9bd60c82e58" containerName="neutron-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.229706 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb943c46-2f32-43db-8192-d20d6e8059ea" containerName="dnsmasq-dns" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.229804 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" containerName="glance-db-sync" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.231348 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.257466 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320321 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320380 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cph\" (UniqueName: \"kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320406 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320458 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320495 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.320529 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.326862 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.329148 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.340961 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.341147 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.341407 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jnktl" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.376764 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.443107 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.444155 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cph\" (UniqueName: \"kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.444768 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.444380 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445169 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445272 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445359 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445430 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445494 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445533 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7ct\" (UniqueName: \"kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445625 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.445783 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.446945 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.452440 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.453286 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.454550 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.515467 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cph\" (UniqueName: 
\"kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph\") pod \"dnsmasq-dns-7d88d7b95f-s7vh4\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.521758 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.574120 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.575941 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.576013 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.576047 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7ct\" (UniqueName: \"kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.576087 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.576138 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.593704 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.624312 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.625136 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc 
kubenswrapper[4703]: I1209 12:28:07.635171 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.641293 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7ct\" (UniqueName: \"kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct\") pod \"neutron-74fff9b6-zdlbc\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") " pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.763097 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.813858 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.864067 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.874688 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887713 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887757 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887779 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzxv\" (UniqueName: \"kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887818 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887891 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.887950 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.894763 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.990851 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.990911 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.990932 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvzxv\" (UniqueName: \"kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.990951 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.991013 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.991097 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.992136 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.992752 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " 
pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.993357 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.994156 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:07 crc kubenswrapper[4703]: I1209 12:28:07.994704 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.015803 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvzxv\" (UniqueName: \"kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv\") pod \"dnsmasq-dns-55f844cf75-t56kd\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.227875 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.361742 4703 scope.go:117] "RemoveContainer" containerID="c514fa49afd349d8a6515b420b8a40a44a3ef0067e8220566ec24af3ee1edce6" Dec 09 12:28:08 crc kubenswrapper[4703]: E1209 12:28:08.417794 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 09 12:28:08 crc kubenswrapper[4703]: E1209 12:28:08.417987 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk6px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5tddl_openstack(24ebaba5-65a6-4be5-8112-10c77a6d986c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:08 crc kubenswrapper[4703]: E1209 12:28:08.419114 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5tddl" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.497448 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.499951 4703 util.go:30] "No sandbox for pod can be found. 
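
ErrImagePull on the failed cinder-db-sync pull above and the ImagePullBackOff entry that appears a few lines below are two phases of the same mechanism: after a pull fails, the pod worker opens a backoff window (typically starting around 10s and doubling up to a cap of a few minutes), and sync attempts that land inside the window are rejected immediately with ImagePullBackOff rather than re-pulling. A sketch of that behaviour, not kubelet's implementation:

    // Illustrative pull backoff: each failure doubles the wait, capped;
    // syncs inside the window fail fast as ImagePullBackOff.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff, cap := 10*time.Second, 5*time.Minute
        var next time.Time // zero value: first pull is allowed immediately
        for attempt := 1; attempt <= 5; attempt++ {
            if wait := time.Until(next); wait > 0 {
                fmt.Printf("attempt %d: ImagePullBackOff, %s left in window\n",
                    attempt, wait.Round(time.Second))
                continue
            }
            fmt.Printf("attempt %d: pull failed (ErrImagePull), backing off %s\n",
                attempt, backoff)
            next = time.Now().Add(backoff)
            if backoff *= 2; backoff > cap {
                backoff = cap
            }
        }
    }
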
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.509160 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.509328 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.509431 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5lwc2" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.517448 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604400 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604468 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604558 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604613 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604645 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604670 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.604753 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2jgt\" (UniqueName: \"kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt\") pod \"glance-default-external-api-0\" (UID: 
\"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: E1209 12:28:08.689557 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5tddl" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714109 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714280 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714317 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714361 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714473 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2jgt\" (UniqueName: \"kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714520 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.714559 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.715879 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " 
pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.716601 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.717795 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.717832 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/886135ab14297acda1ec732ab82ea94d7b160862d21cc21e3e28b5e1ec2b3603/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.720510 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.720777 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.725217 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.751254 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2jgt\" (UniqueName: \"kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.762687 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:08 crc kubenswrapper[4703]: I1209 12:28:08.826080 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.103055 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.104950 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.109562 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.132492 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224006 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224144 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224215 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224281 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224338 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224380 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt25\" (UniqueName: \"kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.224461 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326531 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326612 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326675 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326712 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326749 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326777 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.326804 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt25\" (UniqueName: \"kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.328855 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.329161 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " 
pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.332908 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.337261 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.340581 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.343837 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.343881 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/797e4922fbff3a4efc14293ba89fb3aa4e856c71aa0daf9f22ac638d77d5a369/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.348003 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt25\" (UniqueName: \"kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.397411 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:09 crc kubenswrapper[4703]: I1209 12:28:09.427940 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:12 crc kubenswrapper[4703]: I1209 12:28:12.431494 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:12 crc kubenswrapper[4703]: I1209 12:28:12.724350 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.099969 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-79c5b588df-ksgrp"] Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.102246 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.111038 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.111244 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.115931 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79c5b588df-ksgrp"] Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.168838 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-httpd-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.168947 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-internal-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.169122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-combined-ca-bundle\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.169176 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8thvk\" (UniqueName: \"kubernetes.io/projected/b68ace68-caf9-489c-88d9-0daf26bfdeb7-kube-api-access-8thvk\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.169319 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.169392 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-ovndb-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: 
\"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.169587 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-public-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.273645 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.273750 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-ovndb-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.273960 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-public-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.274072 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-httpd-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.274168 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-internal-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.274900 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-combined-ca-bundle\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.274938 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8thvk\" (UniqueName: \"kubernetes.io/projected/b68ace68-caf9-489c-88d9-0daf26bfdeb7-kube-api-access-8thvk\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.282678 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-internal-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc 
kubenswrapper[4703]: I1209 12:28:14.283331 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.291043 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-combined-ca-bundle\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.291124 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-httpd-config\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.291658 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-ovndb-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.291812 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b68ace68-caf9-489c-88d9-0daf26bfdeb7-public-tls-certs\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.295405 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8thvk\" (UniqueName: \"kubernetes.io/projected/b68ace68-caf9-489c-88d9-0daf26bfdeb7-kube-api-access-8thvk\") pod \"neutron-79c5b588df-ksgrp\" (UID: \"b68ace68-caf9-489c-88d9-0daf26bfdeb7\") " pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:14 crc kubenswrapper[4703]: I1209 12:28:14.469943 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:16 crc kubenswrapper[4703]: I1209 12:28:16.967029 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6b8bj"] Dec 09 12:28:17 crc kubenswrapper[4703]: E1209 12:28:17.634955 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 09 12:28:17 crc kubenswrapper[4703]: E1209 12:28:17.635053 4703 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Dec 09 12:28:17 crc kubenswrapper[4703]: E1209 12:28:17.635274 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8xljc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-jhwbv_openstack(8dc2a644-ad99-4857-9913-562b6ed7371f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:17 crc kubenswrapper[4703]: E1209 12:28:17.636494 4703 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-jhwbv" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" Dec 09 12:28:17 crc kubenswrapper[4703]: W1209 12:28:17.689447 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ecae80d_2552_44d1_9b62_c7112893d38f.slice/crio-1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6 WatchSource:0}: Error finding container 1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6: Status 404 returned error can't find the container with id 1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6 Dec 09 12:28:17 crc kubenswrapper[4703]: I1209 12:28:17.913899 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6b8bj" event={"ID":"3ecae80d-2552-44d1-9b62-c7112893d38f","Type":"ContainerStarted","Data":"1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6"} Dec 09 12:28:17 crc kubenswrapper[4703]: E1209 12:28:17.926641 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-jhwbv" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" Dec 09 12:28:18 crc kubenswrapper[4703]: W1209 12:28:18.268942 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc23c3529_4f24_4180_84ee_f50c0824b3db.slice/crio-1330ae07edafbde2470130e7d921db0fd80343aa24855b1efb0e95bc12024aab WatchSource:0}: Error finding container 1330ae07edafbde2470130e7d921db0fd80343aa24855b1efb0e95bc12024aab: Status 404 returned error can't find the container with id 1330ae07edafbde2470130e7d921db0fd80343aa24855b1efb0e95bc12024aab Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.270343 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.381539 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.551788 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:18 crc kubenswrapper[4703]: W1209 12:28:18.569265 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5292ab55_fe87_4bb9_8975_5a06bcc517e0.slice/crio-eb66cd1a9801a447461638231e1db40c83fd42bf3ddd39fcf1f36ed85af0afa2 WatchSource:0}: Error finding container eb66cd1a9801a447461638231e1db40c83fd42bf3ddd39fcf1f36ed85af0afa2: Status 404 returned error can't find the container with id eb66cd1a9801a447461638231e1db40c83fd42bf3ddd39fcf1f36ed85af0afa2 Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.688470 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.843049 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-79c5b588df-ksgrp"] Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.946851 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-q85xp" event={"ID":"75741f58-7780-4daa-a61b-6985e8baeb5b","Type":"ContainerStarted","Data":"ffe8c00834f88e8a1b16950144072de9b323588a9a22d84558f4307225e3aba7"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.956249 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerStarted","Data":"f9fe0e371f13304d4a5890d09fdef866dc7cd94ceb10c146205e6f0a59322040"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.965938 4703 generic.go:334] "Generic (PLEG): container finished" podID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerID="70a1ac91fb3ce9fdd5994aef5f919b90b6b550dcf3e9a59ae3a2a1330b2052e9" exitCode=0 Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.967463 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" event={"ID":"c23c3529-4f24-4180-84ee-f50c0824b3db","Type":"ContainerDied","Data":"70a1ac91fb3ce9fdd5994aef5f919b90b6b550dcf3e9a59ae3a2a1330b2052e9"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.967582 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" event={"ID":"c23c3529-4f24-4180-84ee-f50c0824b3db","Type":"ContainerStarted","Data":"1330ae07edafbde2470130e7d921db0fd80343aa24855b1efb0e95bc12024aab"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.979707 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerStarted","Data":"196a84f9faf1f8521ddb23fbe69489d831a45350fd9be2fd08437f7bb40847e2"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.982299 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-q85xp" podStartSLOduration=8.779744063 podStartE2EDuration="50.982285112s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" firstStartedPulling="2025-12-09 12:27:30.351288512 +0000 UTC m=+1349.600052031" lastFinishedPulling="2025-12-09 12:28:12.553829561 +0000 UTC m=+1391.802593080" observedRunningTime="2025-12-09 12:28:18.973562675 +0000 UTC m=+1398.222326184" watchObservedRunningTime="2025-12-09 12:28:18.982285112 +0000 UTC m=+1398.231048631" Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.983403 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6b8bj" event={"ID":"3ecae80d-2552-44d1-9b62-c7112893d38f","Type":"ContainerStarted","Data":"a267761f02b0927b2a54c59cddb2937a8648c0ef4f03c7f5e1463b4b17805921"} Dec 09 12:28:18 crc kubenswrapper[4703]: I1209 12:28:18.994889 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lq8j2" event={"ID":"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7","Type":"ContainerStarted","Data":"5f49fc5c65b86923d680125a38df5215f1aed13237ac8f72e8e63f62750a4346"} Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.004389 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79c5b588df-ksgrp" event={"ID":"b68ace68-caf9-489c-88d9-0daf26bfdeb7","Type":"ContainerStarted","Data":"d62621ed90eadf8de8201386591341591cc3abc850792fea750aea17a672cd81"} Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.037993 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6b8bj" podStartSLOduration=38.037973333 podStartE2EDuration="38.037973333s" podCreationTimestamp="2025-12-09 12:27:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:19.034130357 +0000 UTC m=+1398.282893896" watchObservedRunningTime="2025-12-09 12:28:19.037973333 +0000 UTC m=+1398.286736852" Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.060511 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerStarted","Data":"eb66cd1a9801a447461638231e1db40c83fd42bf3ddd39fcf1f36ed85af0afa2"} Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.132878 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lq8j2" podStartSLOduration=16.076564994 podStartE2EDuration="51.132856424s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" firstStartedPulling="2025-12-09 12:27:30.701512699 +0000 UTC m=+1349.950276218" lastFinishedPulling="2025-12-09 12:28:05.757804129 +0000 UTC m=+1385.006567648" observedRunningTime="2025-12-09 12:28:19.072614749 +0000 UTC m=+1398.321378268" watchObservedRunningTime="2025-12-09 12:28:19.132856424 +0000 UTC m=+1398.381619943" Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.135348 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" event={"ID":"90b0d7de-0a52-4200-8644-9a4561cfe725","Type":"ContainerStarted","Data":"1911eabe5c4d40d8789a18c31a26d904dd49dd31e355a9298e65f6a8f524ef79"} Dec 09 12:28:19 crc kubenswrapper[4703]: I1209 12:28:19.425173 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:19 crc kubenswrapper[4703]: W1209 12:28:19.437401 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e38588e_3f75_4755_8201_6b95541ff106.slice/crio-d9d1d90a1a34448c6b4f403173b8fe2a01e75c98dd4f42be904c573ca2793cac WatchSource:0}: Error finding container d9d1d90a1a34448c6b4f403173b8fe2a01e75c98dd4f42be904c573ca2793cac: Status 404 returned error can't find the container with id d9d1d90a1a34448c6b4f403173b8fe2a01e75c98dd4f42be904c573ca2793cac Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.107269 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerStarted","Data":"93950ba68f9aeddcee195a8836e85ce62543205c5a82a660738b7caa358c020f"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.119307 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" event={"ID":"c23c3529-4f24-4180-84ee-f50c0824b3db","Type":"ContainerStarted","Data":"fd6d85c18f5436e936f17bcd1def02282c8292b6e7829b2b04585449ed5aee0b"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.121304 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.138098 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79c5b588df-ksgrp" event={"ID":"b68ace68-caf9-489c-88d9-0daf26bfdeb7","Type":"ContainerStarted","Data":"fdec533e7d048f41aff21cdcf6d66a7802647f8feb94cc12da9440d6bb064229"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.143578 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" 
event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerStarted","Data":"b9d54ab745e1ede6572649622e30d1f60eee4f9770db1af98b5bd7f8460b4099"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.143645 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerStarted","Data":"4fff200c8f68ea22d3b22f55bea6562225b2518482ebdeddc38c45ae5f0acd87"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.144217 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.167638 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerStarted","Data":"d9d1d90a1a34448c6b4f403173b8fe2a01e75c98dd4f42be904c573ca2793cac"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.169889 4703 generic.go:334] "Generic (PLEG): container finished" podID="90b0d7de-0a52-4200-8644-9a4561cfe725" containerID="f7e19feb7cccc2059e44d4f7543ee467bfc1af7228d3524063ec7f4220ebc352" exitCode=0 Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.171240 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" event={"ID":"90b0d7de-0a52-4200-8644-9a4561cfe725","Type":"ContainerDied","Data":"f7e19feb7cccc2059e44d4f7543ee467bfc1af7228d3524063ec7f4220ebc352"} Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.178349 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" podStartSLOduration=13.178318216 podStartE2EDuration="13.178318216s" podCreationTimestamp="2025-12-09 12:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:20.161638669 +0000 UTC m=+1399.410402188" watchObservedRunningTime="2025-12-09 12:28:20.178318216 +0000 UTC m=+1399.427081755" Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.218484 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74fff9b6-zdlbc" podStartSLOduration=13.218461658 podStartE2EDuration="13.218461658s" podCreationTimestamp="2025-12-09 12:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:20.195529035 +0000 UTC m=+1399.444292564" watchObservedRunningTime="2025-12-09 12:28:20.218461658 +0000 UTC m=+1399.467225177" Dec 09 12:28:20 crc kubenswrapper[4703]: I1209 12:28:20.993704 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.085703 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.086160 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.086201 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.086246 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.086280 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2cph\" (UniqueName: \"kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.086344 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config\") pod \"90b0d7de-0a52-4200-8644-9a4561cfe725\" (UID: \"90b0d7de-0a52-4200-8644-9a4561cfe725\") " Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.156040 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph" (OuterVolumeSpecName: "kube-api-access-d2cph") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "kube-api-access-d2cph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.193295 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2cph\" (UniqueName: \"kubernetes.io/projected/90b0d7de-0a52-4200-8644-9a4561cfe725-kube-api-access-d2cph\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.200780 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config" (OuterVolumeSpecName: "config") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.205515 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.217930 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.246926 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.247552 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" event={"ID":"90b0d7de-0a52-4200-8644-9a4561cfe725","Type":"ContainerDied","Data":"1911eabe5c4d40d8789a18c31a26d904dd49dd31e355a9298e65f6a8f524ef79"} Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.247606 4703 scope.go:117] "RemoveContainer" containerID="f7e19feb7cccc2059e44d4f7543ee467bfc1af7228d3524063ec7f4220ebc352" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.247769 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-s7vh4" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.256213 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-79c5b588df-ksgrp" event={"ID":"b68ace68-caf9-489c-88d9-0daf26bfdeb7","Type":"ContainerStarted","Data":"48180f3738ccdb48bc557c4e688db4d5c8630025fbef221b90b4859058c572dc"} Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.257345 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.263036 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerStarted","Data":"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14"} Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.283924 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90b0d7de-0a52-4200-8644-9a4561cfe725" (UID: "90b0d7de-0a52-4200-8644-9a4561cfe725"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.300321 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.300372 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.300385 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.300398 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.300409 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90b0d7de-0a52-4200-8644-9a4561cfe725-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.305875 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-79c5b588df-ksgrp" podStartSLOduration=7.305813127 podStartE2EDuration="7.305813127s" podCreationTimestamp="2025-12-09 12:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:21.284339 +0000 UTC m=+1400.533102519" watchObservedRunningTime="2025-12-09 12:28:21.305813127 +0000 UTC m=+1400.554576646" Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.753873 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:21 crc kubenswrapper[4703]: I1209 12:28:21.770932 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-s7vh4"] Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.284527 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerStarted","Data":"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076"} Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.284621 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-log" containerID="cri-o://3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" gracePeriod=30 Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.284651 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-httpd" containerID="cri-o://2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" gracePeriod=30 Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.290809 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5tddl" 
event={"ID":"24ebaba5-65a6-4be5-8112-10c77a6d986c","Type":"ContainerStarted","Data":"6f5fa72102cf4873c2f34f345e09eb221c662001500660781d4003559b5546ae"} Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.303109 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-log" containerID="cri-o://93950ba68f9aeddcee195a8836e85ce62543205c5a82a660738b7caa358c020f" gracePeriod=30 Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.303434 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerStarted","Data":"5b6034afbb654131d7baac5e979bce18651dbfad9dd382dde333286858336607"} Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.304193 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-httpd" containerID="cri-o://5b6034afbb654131d7baac5e979bce18651dbfad9dd382dde333286858336607" gracePeriod=30 Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.319548 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.319526096 podStartE2EDuration="15.319526096s" podCreationTimestamp="2025-12-09 12:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:22.305596158 +0000 UTC m=+1401.554359687" watchObservedRunningTime="2025-12-09 12:28:22.319526096 +0000 UTC m=+1401.568289615" Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.337316 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5tddl" podStartSLOduration=4.389934318 podStartE2EDuration="54.337298159s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" firstStartedPulling="2025-12-09 12:27:30.350139892 +0000 UTC m=+1349.598903411" lastFinishedPulling="2025-12-09 12:28:20.297503723 +0000 UTC m=+1399.546267252" observedRunningTime="2025-12-09 12:28:22.331281149 +0000 UTC m=+1401.580044678" watchObservedRunningTime="2025-12-09 12:28:22.337298159 +0000 UTC m=+1401.586061678" Dec 09 12:28:22 crc kubenswrapper[4703]: I1209 12:28:22.371163 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.371138085 podStartE2EDuration="14.371138085s" podCreationTimestamp="2025-12-09 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:22.354322575 +0000 UTC m=+1401.603086094" watchObservedRunningTime="2025-12-09 12:28:22.371138085 +0000 UTC m=+1401.619901604" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.112390 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b0d7de-0a52-4200-8644-9a4561cfe725" path="/var/lib/kubelet/pods/90b0d7de-0a52-4200-8644-9a4561cfe725/volumes" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.128289 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.251242 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.251612 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2jgt\" (UniqueName: \"kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.251782 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.252000 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.252195 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.252723 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.252954 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data\") pod \"1e38588e-3f75-4755-8201-6b95541ff106\" (UID: \"1e38588e-3f75-4755-8201-6b95541ff106\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.255400 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs" (OuterVolumeSpecName: "logs") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.256947 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.262757 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts" (OuterVolumeSpecName: "scripts") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.267064 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt" (OuterVolumeSpecName: "kube-api-access-q2jgt") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "kube-api-access-q2jgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.295553 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703" (OuterVolumeSpecName: "glance") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "pvc-b4442429-2353-4ea7-ac70-798afe17b703". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.326380 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364439 4703 generic.go:334] "Generic (PLEG): container finished" podID="93142f2e-b931-41a8-8d67-f586853197a1" containerID="5b6034afbb654131d7baac5e979bce18651dbfad9dd382dde333286858336607" exitCode=143 Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364478 4703 generic.go:334] "Generic (PLEG): container finished" podID="93142f2e-b931-41a8-8d67-f586853197a1" containerID="93950ba68f9aeddcee195a8836e85ce62543205c5a82a660738b7caa358c020f" exitCode=143 Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364526 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerDied","Data":"5b6034afbb654131d7baac5e979bce18651dbfad9dd382dde333286858336607"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364564 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerDied","Data":"93950ba68f9aeddcee195a8836e85ce62543205c5a82a660738b7caa358c020f"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364580 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"93142f2e-b931-41a8-8d67-f586853197a1","Type":"ContainerDied","Data":"f9fe0e371f13304d4a5890d09fdef866dc7cd94ceb10c146205e6f0a59322040"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.364593 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9fe0e371f13304d4a5890d09fdef866dc7cd94ceb10c146205e6f0a59322040" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.373258 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data" (OuterVolumeSpecName: "config-data") pod "1e38588e-3f75-4755-8201-6b95541ff106" (UID: "1e38588e-3f75-4755-8201-6b95541ff106"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.374945 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375396 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") on node \"crc\" " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375526 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375680 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375767 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e38588e-3f75-4755-8201-6b95541ff106-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375852 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2jgt\" (UniqueName: \"kubernetes.io/projected/1e38588e-3f75-4755-8201-6b95541ff106-kube-api-access-q2jgt\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.375941 4703 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e38588e-3f75-4755-8201-6b95541ff106-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.381407 4703 generic.go:334] "Generic (PLEG): container finished" podID="1e38588e-3f75-4755-8201-6b95541ff106" containerID="2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" exitCode=143 Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.381444 4703 generic.go:334] "Generic (PLEG): container finished" podID="1e38588e-3f75-4755-8201-6b95541ff106" containerID="3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" exitCode=143 Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.381657 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerDied","Data":"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.381701 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerDied","Data":"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.381742 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e38588e-3f75-4755-8201-6b95541ff106","Type":"ContainerDied","Data":"d9d1d90a1a34448c6b4f403173b8fe2a01e75c98dd4f42be904c573ca2793cac"} Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.382022 4703 scope.go:117] "RemoveContainer" 
containerID="2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.382558 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.425258 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.425430 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4442429-2353-4ea7-ac70-798afe17b703" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703") on node "crc" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.478189 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.485581 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.485753 4703 scope.go:117] "RemoveContainer" containerID="3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.548475 4703 scope.go:117] "RemoveContainer" containerID="2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.561551 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076\": container with ID starting with 2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076 not found: ID does not exist" containerID="2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.561615 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076"} err="failed to get container status \"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076\": rpc error: code = NotFound desc = could not find container \"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076\": container with ID starting with 2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076 not found: ID does not exist" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.561662 4703 scope.go:117] "RemoveContainer" containerID="3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.561791 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.564378 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14\": container with ID starting with 3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14 not found: ID does not exist" containerID="3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.564432 
4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14"} err="failed to get container status \"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14\": rpc error: code = NotFound desc = could not find container \"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14\": container with ID starting with 3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14 not found: ID does not exist" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.564465 4703 scope.go:117] "RemoveContainer" containerID="2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.569523 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076"} err="failed to get container status \"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076\": rpc error: code = NotFound desc = could not find container \"2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076\": container with ID starting with 2ef681b336d01a0dd53c863f5ebb4003476d285f8511e43d00935ee88a5d2076 not found: ID does not exist" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.569581 4703 scope.go:117] "RemoveContainer" containerID="3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.570071 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14"} err="failed to get container status \"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14\": rpc error: code = NotFound desc = could not find container \"3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14\": container with ID starting with 3e3c91a6ba2d81e28bdf74e8357036d175090cd66c4a415167daa780e43d1a14 not found: ID does not exist" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580301 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580356 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580526 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580664 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580685 4703 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hnt25\" (UniqueName: \"kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580721 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.580877 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"93142f2e-b931-41a8-8d67-f586853197a1\" (UID: \"93142f2e-b931-41a8-8d67-f586853197a1\") " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.582136 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.582522 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs" (OuterVolumeSpecName: "logs") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.587229 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.596012 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25" (OuterVolumeSpecName: "kube-api-access-hnt25") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "kube-api-access-hnt25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.599539 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts" (OuterVolumeSpecName: "scripts") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.624908 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.625612 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.625635 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.625666 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.625674 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.625684 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.625692 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.625722 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b0d7de-0a52-4200-8644-9a4561cfe725" containerName="init" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.625730 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b0d7de-0a52-4200-8644-9a4561cfe725" containerName="init" Dec 09 12:28:23 crc kubenswrapper[4703]: E1209 12:28:23.625744 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.625816 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.626775 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.626799 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b0d7de-0a52-4200-8644-9a4561cfe725" containerName="init" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.626818 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.626839 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="93142f2e-b931-41a8-8d67-f586853197a1" containerName="glance-log" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.626860 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e38588e-3f75-4755-8201-6b95541ff106" containerName="glance-httpd" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.630147 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.641362 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4" (OuterVolumeSpecName: "glance") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.652158 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.668464 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.669825 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687777 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwk5\" (UniqueName: \"kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687867 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687894 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687930 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687957 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.687977 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688030 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688055 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688117 4703 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688129 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688143 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnt25\" (UniqueName: \"kubernetes.io/projected/93142f2e-b931-41a8-8d67-f586853197a1-kube-api-access-hnt25\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688166 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") on node \"crc\" " Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688175 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688185 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93142f2e-b931-41a8-8d67-f586853197a1-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.688701 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.703821 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data" (OuterVolumeSpecName: "config-data") pod "93142f2e-b931-41a8-8d67-f586853197a1" (UID: "93142f2e-b931-41a8-8d67-f586853197a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.741224 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.741398 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4") on node "crc" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790154 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790322 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790378 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790411 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790441 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790518 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790559 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790641 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfwk5\" (UniqueName: \"kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790759 4703 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/93142f2e-b931-41a8-8d67-f586853197a1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.790805 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.792663 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.793947 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.800180 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.801081 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.801097 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.801932 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.808023 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.808088 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/886135ab14297acda1ec732ab82ea94d7b160862d21cc21e3e28b5e1ec2b3603/globalmount\"" pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.817580 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfwk5\" (UniqueName: \"kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.865379 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:23 crc kubenswrapper[4703]: I1209 12:28:23.982698 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.407972 4703 generic.go:334] "Generic (PLEG): container finished" podID="6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" containerID="5f49fc5c65b86923d680125a38df5215f1aed13237ac8f72e8e63f62750a4346" exitCode=0 Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.408150 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lq8j2" event={"ID":"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7","Type":"ContainerDied","Data":"5f49fc5c65b86923d680125a38df5215f1aed13237ac8f72e8e63f62750a4346"} Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.410591 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.473406 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.485050 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.506462 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.509548 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.521559 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.521795 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.549277 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608533 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608590 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608642 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608736 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7ph\" (UniqueName: \"kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608774 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608823 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608870 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.608900 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709751 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709843 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709878 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709903 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709931 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.709990 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts7ph\" (UniqueName: \"kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.710018 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.710060 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.710913 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.711421 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.716485 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.717125 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.717173 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.717292 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.717316 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/797e4922fbff3a4efc14293ba89fb3aa4e856c71aa0daf9f22ac638d77d5a369/globalmount\"" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.730798 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.732390 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts7ph\" (UniqueName: \"kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.774419 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:24 crc kubenswrapper[4703]: I1209 12:28:24.842017 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:25 crc kubenswrapper[4703]: I1209 12:28:25.096814 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e38588e-3f75-4755-8201-6b95541ff106" path="/var/lib/kubelet/pods/1e38588e-3f75-4755-8201-6b95541ff106/volumes" Dec 09 12:28:25 crc kubenswrapper[4703]: I1209 12:28:25.097946 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93142f2e-b931-41a8-8d67-f586853197a1" path="/var/lib/kubelet/pods/93142f2e-b931-41a8-8d67-f586853197a1/volumes" Dec 09 12:28:25 crc kubenswrapper[4703]: I1209 12:28:25.426775 4703 generic.go:334] "Generic (PLEG): container finished" podID="3ecae80d-2552-44d1-9b62-c7112893d38f" containerID="a267761f02b0927b2a54c59cddb2937a8648c0ef4f03c7f5e1463b4b17805921" exitCode=0 Dec 09 12:28:25 crc kubenswrapper[4703]: I1209 12:28:25.426831 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6b8bj" event={"ID":"3ecae80d-2552-44d1-9b62-c7112893d38f","Type":"ContainerDied","Data":"a267761f02b0927b2a54c59cddb2937a8648c0ef4f03c7f5e1463b4b17805921"} Dec 09 12:28:26 crc kubenswrapper[4703]: I1209 12:28:26.443582 4703 generic.go:334] "Generic (PLEG): container finished" podID="75741f58-7780-4daa-a61b-6985e8baeb5b" containerID="ffe8c00834f88e8a1b16950144072de9b323588a9a22d84558f4307225e3aba7" exitCode=0 Dec 09 12:28:26 crc kubenswrapper[4703]: I1209 12:28:26.443771 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q85xp" event={"ID":"75741f58-7780-4daa-a61b-6985e8baeb5b","Type":"ContainerDied","Data":"ffe8c00834f88e8a1b16950144072de9b323588a9a22d84558f4307225e3aba7"} Dec 09 12:28:28 crc kubenswrapper[4703]: I1209 12:28:28.230646 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:28 crc kubenswrapper[4703]: I1209 12:28:28.329850 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:28:28 crc kubenswrapper[4703]: I1209 12:28:28.330161 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="dnsmasq-dns" containerID="cri-o://bf916b9103a92ddce3d76c94f3ac4c7a6f28d494ba8a425857946f5961fa78d9" gracePeriod=10 Dec 09 12:28:29 crc kubenswrapper[4703]: I1209 12:28:29.714749 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.159:5353: connect: connection refused" Dec 09 12:28:30 crc kubenswrapper[4703]: I1209 12:28:30.493281 4703 generic.go:334] "Generic (PLEG): container finished" podID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerID="bf916b9103a92ddce3d76c94f3ac4c7a6f28d494ba8a425857946f5961fa78d9" exitCode=0 Dec 09 12:28:30 crc kubenswrapper[4703]: I1209 12:28:30.493338 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" event={"ID":"9f9f4ead-ef04-4275-82de-c42a903d8252","Type":"ContainerDied","Data":"bf916b9103a92ddce3d76c94f3ac4c7a6f28d494ba8a425857946f5961fa78d9"} Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.642629 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6b8bj" 
event={"ID":"3ecae80d-2552-44d1-9b62-c7112893d38f","Type":"ContainerDied","Data":"1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6"} Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.646443 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec8e319ada282e92e48bfa115b080bb042a05821d63da886c3906d33d0233a6" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.653934 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-q85xp" event={"ID":"75741f58-7780-4daa-a61b-6985e8baeb5b","Type":"ContainerDied","Data":"b540b48761bf247b0841e5ee3e9b7e2711d8a084cc6aaa764f520a2a57f405ef"} Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.653987 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b540b48761bf247b0841e5ee3e9b7e2711d8a084cc6aaa764f520a2a57f405ef" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.657386 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lq8j2" event={"ID":"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7","Type":"ContainerDied","Data":"ba09cb2d539742949e9814c5fafa805e5c3b35c1ff55013f0ab8427ee47f04db"} Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.657453 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba09cb2d539742949e9814c5fafa805e5c3b35c1ff55013f0ab8427ee47f04db" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.812041 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lq8j2" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.834672 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts\") pod \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.834751 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gfj6\" (UniqueName: \"kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6\") pod \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.834778 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs\") pod \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.834796 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle\") pod \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.843404 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data\") pod \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\" (UID: \"6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.850049 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q85xp" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.853756 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6" (OuterVolumeSpecName: "kube-api-access-6gfj6") pod "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" (UID: "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7"). InnerVolumeSpecName "kube-api-access-6gfj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.868244 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs" (OuterVolumeSpecName: "logs") pod "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" (UID: "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.884938 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts" (OuterVolumeSpecName: "scripts") pod "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" (UID: "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.905121 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.921480 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" (UID: "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.921938 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.945905 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.945983 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle\") pod \"75741f58-7780-4daa-a61b-6985e8baeb5b\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946088 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946139 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946415 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946481 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946498 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data\") pod \"75741f58-7780-4daa-a61b-6985e8baeb5b\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946537 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946562 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946589 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: 
\"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946614 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnphs\" (UniqueName: \"kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946658 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbp2f\" (UniqueName: \"kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f\") pod \"75741f58-7780-4daa-a61b-6985e8baeb5b\" (UID: \"75741f58-7780-4daa-a61b-6985e8baeb5b\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946687 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skrx9\" (UniqueName: \"kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946719 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys\") pod \"3ecae80d-2552-44d1-9b62-c7112893d38f\" (UID: \"3ecae80d-2552-44d1-9b62-c7112893d38f\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.946758 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config\") pod \"9f9f4ead-ef04-4275-82de-c42a903d8252\" (UID: \"9f9f4ead-ef04-4275-82de-c42a903d8252\") " Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.947258 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.947277 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gfj6\" (UniqueName: \"kubernetes.io/projected/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-kube-api-access-6gfj6\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.947290 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.947301 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.948584 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data" (OuterVolumeSpecName: "config-data") pod "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" (UID: "6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.983766 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts" (OuterVolumeSpecName: "scripts") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4703]: I1209 12:28:33.987025 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.018009 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "75741f58-7780-4daa-a61b-6985e8baeb5b" (UID: "75741f58-7780-4daa-a61b-6985e8baeb5b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.019375 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.021901 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f" (OuterVolumeSpecName: "kube-api-access-dbp2f") pod "75741f58-7780-4daa-a61b-6985e8baeb5b" (UID: "75741f58-7780-4daa-a61b-6985e8baeb5b"). InnerVolumeSpecName "kube-api-access-dbp2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.032568 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9" (OuterVolumeSpecName: "kube-api-access-skrx9") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "kube-api-access-skrx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.036904 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs" (OuterVolumeSpecName: "kube-api-access-cnphs") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "kube-api-access-cnphs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.052656 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053159 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053312 4703 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053417 4703 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053497 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnphs\" (UniqueName: \"kubernetes.io/projected/3ecae80d-2552-44d1-9b62-c7112893d38f-kube-api-access-cnphs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053579 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbp2f\" (UniqueName: \"kubernetes.io/projected/75741f58-7780-4daa-a61b-6985e8baeb5b-kube-api-access-dbp2f\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053708 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skrx9\" (UniqueName: \"kubernetes.io/projected/9f9f4ead-ef04-4275-82de-c42a903d8252-kube-api-access-skrx9\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.053777 4703 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.465912 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.471506 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data" (OuterVolumeSpecName: "config-data") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.486077 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.531081 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config" (OuterVolumeSpecName: "config") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.570093 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.570144 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.570158 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.570452 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecae80d-2552-44d1-9b62-c7112893d38f" (UID: "3ecae80d-2552-44d1-9b62-c7112893d38f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.578574 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.608598 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:34 crc kubenswrapper[4703]: W1209 12:28:34.609517 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4f79b4_592c_4dc1_ad09_c1582b9d8497.slice/crio-b2024022a0b733b85ee0ec1a275df2eac0ab96983b162851c980b4bc645308a8 WatchSource:0}: Error finding container b2024022a0b733b85ee0ec1a275df2eac0ab96983b162851c980b4bc645308a8: Status 404 returned error can't find the container with id b2024022a0b733b85ee0ec1a275df2eac0ab96983b162851c980b4bc645308a8 Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.612745 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75741f58-7780-4daa-a61b-6985e8baeb5b" (UID: "75741f58-7780-4daa-a61b-6985e8baeb5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.623010 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.671905 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.671954 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75741f58-7780-4daa-a61b-6985e8baeb5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.671965 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.671978 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecae80d-2552-44d1-9b62-c7112893d38f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.688901 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f9f4ead-ef04-4275-82de-c42a903d8252" (UID: "9f9f4ead-ef04-4275-82de-c42a903d8252"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.706333 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerStarted","Data":"65547492c074a0670140ecd1ec250f99d736b5ac6aa6dae034ea634c38c1e3d5"} Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.710303 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerStarted","Data":"cf1af995918b617cc87b7847847a40eee22bff6313becd320fef250468f5ac72"} Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.715435 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jhwbv" event={"ID":"8dc2a644-ad99-4857-9913-562b6ed7371f","Type":"ContainerStarted","Data":"943e64a17b3892f1f99a72714c0d139e740d57019c12afba0c75b8320b4461b5"} Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.719973 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerStarted","Data":"b2024022a0b733b85ee0ec1a275df2eac0ab96983b162851c980b4bc645308a8"} Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.748465 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lq8j2" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.752245 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-q85xp" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.753987 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6b8bj" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.756050 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.761751 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-jhwbv" podStartSLOduration=3.47350572 podStartE2EDuration="1m6.76173255s" podCreationTimestamp="2025-12-09 12:27:28 +0000 UTC" firstStartedPulling="2025-12-09 12:27:30.351061886 +0000 UTC m=+1349.599825405" lastFinishedPulling="2025-12-09 12:28:33.639288716 +0000 UTC m=+1412.888052235" observedRunningTime="2025-12-09 12:28:34.737994507 +0000 UTC m=+1413.986758036" watchObservedRunningTime="2025-12-09 12:28:34.76173255 +0000 UTC m=+1414.010496069" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.763052 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-wb7mt" event={"ID":"9f9f4ead-ef04-4275-82de-c42a903d8252","Type":"ContainerDied","Data":"907f0b474d95fa78d278a4c2ace2a3b6c5214bada0d4b009e351b6c05d3d675b"} Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.763159 4703 scope.go:117] "RemoveContainer" containerID="bf916b9103a92ddce3d76c94f3ac4c7a6f28d494ba8a425857946f5961fa78d9" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.775909 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f9f4ead-ef04-4275-82de-c42a903d8252-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.867373 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.893547 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-wb7mt"] Dec 09 12:28:34 crc kubenswrapper[4703]: I1209 12:28:34.898972 4703 scope.go:117] "RemoveContainer" containerID="11c07a92b483ecb8c3533a083dbd1142e196c3e185413489b5d95865823cbae0" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.122827 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" path="/var/lib/kubelet/pods/9f9f4ead-ef04-4275-82de-c42a903d8252/volumes" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.190653 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7f4f589d9f-q5cvn"] Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.191272 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="dnsmasq-dns" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191297 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="dnsmasq-dns" Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.191314 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75741f58-7780-4daa-a61b-6985e8baeb5b" containerName="barbican-db-sync" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191322 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="75741f58-7780-4daa-a61b-6985e8baeb5b" containerName="barbican-db-sync" Dec 09 12:28:35 crc 
kubenswrapper[4703]: E1209 12:28:35.191374 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" containerName="placement-db-sync" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191384 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" containerName="placement-db-sync" Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.191398 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="init" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191406 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="init" Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.191424 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecae80d-2552-44d1-9b62-c7112893d38f" containerName="keystone-bootstrap" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191432 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecae80d-2552-44d1-9b62-c7112893d38f" containerName="keystone-bootstrap" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191704 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecae80d-2552-44d1-9b62-c7112893d38f" containerName="keystone-bootstrap" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191729 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9f4ead-ef04-4275-82de-c42a903d8252" containerName="dnsmasq-dns" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191744 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" containerName="placement-db-sync" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.191757 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="75741f58-7780-4daa-a61b-6985e8baeb5b" containerName="barbican-db-sync" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.193037 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.206776 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vbdwz" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.207058 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.207227 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.268796 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-865d57f67b-x7hvw"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.270786 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: W1209 12:28:35.286841 4703 reflector.go:561] object-"openstack"/"cert-placement-internal-svc": failed to list *v1.Secret: secrets "cert-placement-internal-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.286901 4703 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-placement-internal-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.287091 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.287338 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 12:28:35 crc kubenswrapper[4703]: W1209 12:28:35.287449 4703 reflector.go:561] object-"openstack"/"cert-placement-public-svc": failed to list *v1.Secret: secrets "cert-placement-public-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 09 12:28:35 crc kubenswrapper[4703]: E1209 12:28:35.287464 4703 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-placement-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-placement-public-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.287489 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wjzsp" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.293217 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data-custom\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.293409 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt2f7\" (UniqueName: \"kubernetes.io/projected/dadce417-24c9-4497-91bc-4bbb67fae899-kube-api-access-tt2f7\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.293516 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 
12:28:35.293587 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadce417-24c9-4497-91bc-4bbb67fae899-logs\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.293680 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-combined-ca-bundle\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.308274 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-569b9f958-2lvsd"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.310939 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.328722 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.332978 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-865d57f67b-x7hvw"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.399145 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.405938 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406024 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-public-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406243 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a74b2dd-de01-4b4f-ae23-00856d55f81a-logs\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406387 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-logs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc 
kubenswrapper[4703]: I1209 12:28:35.406429 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt2f7\" (UniqueName: \"kubernetes.io/projected/dadce417-24c9-4497-91bc-4bbb67fae899-kube-api-access-tt2f7\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406535 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dbwt\" (UniqueName: \"kubernetes.io/projected/8a74b2dd-de01-4b4f-ae23-00856d55f81a-kube-api-access-7dbwt\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406619 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-scripts\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406710 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-config-data\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406748 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406865 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadce417-24c9-4497-91bc-4bbb67fae899-logs\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.406997 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data-custom\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.407033 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-combined-ca-bundle\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.407081 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-combined-ca-bundle\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: 
\"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.407177 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxxb\" (UniqueName: \"kubernetes.io/projected/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-kube-api-access-wrxxb\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.407346 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data-custom\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.407438 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-combined-ca-bundle\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.408726 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dadce417-24c9-4497-91bc-4bbb67fae899-logs\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.450355 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-569b9f958-2lvsd"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.453589 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data-custom\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.461631 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-combined-ca-bundle\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.466700 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dadce417-24c9-4497-91bc-4bbb67fae899-config-data\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.523841 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrxxb\" (UniqueName: \"kubernetes.io/projected/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-kube-api-access-wrxxb\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.524344 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-combined-ca-bundle\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.524520 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.524643 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.524748 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-public-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.524855 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a74b2dd-de01-4b4f-ae23-00856d55f81a-logs\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525103 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-logs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525294 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dbwt\" (UniqueName: \"kubernetes.io/projected/8a74b2dd-de01-4b4f-ae23-00856d55f81a-kube-api-access-7dbwt\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525410 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-scripts\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525514 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-config-data\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525751 4703 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data-custom\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.525844 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-combined-ca-bundle\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.532812 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a74b2dd-de01-4b4f-ae23-00856d55f81a-logs\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.543247 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-logs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.550428 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt2f7\" (UniqueName: \"kubernetes.io/projected/dadce417-24c9-4497-91bc-4bbb67fae899-kube-api-access-tt2f7\") pod \"barbican-worker-7f4f589d9f-q5cvn\" (UID: \"dadce417-24c9-4497-91bc-4bbb67fae899\") " pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.559820 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.561907 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.579293 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-scripts\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.603549 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-config-data-custom\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.609254 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a74b2dd-de01-4b4f-ae23-00856d55f81a-combined-ca-bundle\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.609970 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-combined-ca-bundle\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.728165 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-config-data\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.732096 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.741071 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dbwt\" (UniqueName: \"kubernetes.io/projected/8a74b2dd-de01-4b4f-ae23-00856d55f81a-kube-api-access-7dbwt\") pod \"barbican-keystone-listener-569b9f958-2lvsd\" (UID: \"8a74b2dd-de01-4b4f-ae23-00856d55f81a\") " pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.757609 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.762778 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrxxb\" (UniqueName: \"kubernetes.io/projected/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-kube-api-access-wrxxb\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.805263 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f4f589d9f-q5cvn"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836576 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836677 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836710 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c675c\" (UniqueName: \"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836740 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836830 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.836898 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.883270 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.931341 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f58796dd5-gpp8b"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.933399 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.938968 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.939232 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.939278 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.939310 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c675c\" (UniqueName: \"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.939337 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.939421 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.943955 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.944585 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.944609 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 
12:28:35.945213 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.953346 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.970668 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.973878 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f58796dd5-gpp8b"] Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.976647 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.971569 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.971569 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.971622 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.971638 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-s28wx" Dec 09 12:28:35 crc kubenswrapper[4703]: I1209 12:28:35.971685 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.009841 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c675c\" (UniqueName: \"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c\") pod \"dnsmasq-dns-85ff748b95-4ccp2\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.041977 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-config-data\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042082 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-credential-keys\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042142 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqsr6\" (UniqueName: 
\"kubernetes.io/projected/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-kube-api-access-dqsr6\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042177 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-public-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042307 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-internal-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042340 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-combined-ca-bundle\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042398 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-fernet-keys\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.042425 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-scripts\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.131806 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.145549 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-scripts\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.145771 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-config-data\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.145825 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-credential-keys\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.145891 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqsr6\" (UniqueName: \"kubernetes.io/projected/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-kube-api-access-dqsr6\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.145923 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-public-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.146037 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-internal-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.146068 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-combined-ca-bundle\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.146133 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-fernet-keys\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.265329 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.296880 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-fernet-keys\") pod 
\"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.296933 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-public-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.297022 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-credential-keys\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.296936 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-internal-tls-certs\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.297516 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-config-data\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.301155 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqsr6\" (UniqueName: \"kubernetes.io/projected/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-kube-api-access-dqsr6\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.305100 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-combined-ca-bundle\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.309505 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.310086 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-public-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.311607 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b-scripts\") pod \"keystone-5f58796dd5-gpp8b\" (UID: \"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b\") " pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.317762 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.323419 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.376223 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.456676 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.456841 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.456959 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.456997 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp5ws\" (UniqueName: \"kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.457028 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: E1209 12:28:36.538337 4703 secret.go:188] Couldn't get secret openstack/cert-placement-internal-svc: failed to sync secret cache: timed out waiting for the condition Dec 09 12:28:36 crc kubenswrapper[4703]: E1209 12:28:36.538457 4703 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs podName:2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3 nodeName:}" failed. No retries permitted until 2025-12-09 12:28:37.038433957 +0000 UTC m=+1416.287197476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs") pod "placement-865d57f67b-x7hvw" (UID: "2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.566021 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.566377 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp5ws\" (UniqueName: \"kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.566414 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.566509 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.566604 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.567102 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.577415 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.609054 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.609712 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.612961 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.619726 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp5ws\" (UniqueName: \"kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws\") pod \"barbican-api-94fd6bfbb-8c7xs\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.704500 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.853448 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 09 12:28:36 crc kubenswrapper[4703]: I1209 12:28:36.918989 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f4f589d9f-q5cvn"] Dec 09 12:28:36 crc kubenswrapper[4703]: W1209 12:28:36.936341 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddadce417_24c9_4497_91bc_4bbb67fae899.slice/crio-be11ee7bdc814701501bb3e123b319580b823a8baaed2b60bf62208e8b1039ed WatchSource:0}: Error finding container be11ee7bdc814701501bb3e123b319580b823a8baaed2b60bf62208e8b1039ed: Status 404 returned error can't find the container with id be11ee7bdc814701501bb3e123b319580b823a8baaed2b60bf62208e8b1039ed Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.100580 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.114509 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3-internal-tls-certs\") pod \"placement-865d57f67b-x7hvw\" (UID: \"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3\") " pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.130124 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.139672 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerStarted","Data":"d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf"} Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.160905 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerStarted","Data":"dd5e176b5dc8e23e00e8dc2a1f9d5037a6aa2d80f99ab89d8a59b5211d30d1e8"} Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.165446 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" event={"ID":"dadce417-24c9-4497-91bc-4bbb67fae899","Type":"ContainerStarted","Data":"be11ee7bdc814701501bb3e123b319580b823a8baaed2b60bf62208e8b1039ed"} Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.445956 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.463565 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-569b9f958-2lvsd"] Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.684416 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f58796dd5-gpp8b"] Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.799784 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.800507 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.800528 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:28:37 crc kubenswrapper[4703]: I1209 12:28:37.884560 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:28:37 crc kubenswrapper[4703]: W1209 12:28:37.901277 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff08dc4_6bf7_4c61_bdf4_289c0c653a2f.slice/crio-ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397 WatchSource:0}: Error finding container ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397: Status 404 returned error can't find the container with id ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397 Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.093708 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-865d57f67b-x7hvw"] Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.211871 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865d57f67b-x7hvw" 
event={"ID":"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3","Type":"ContainerStarted","Data":"821ad7925de44d0ff24037bb9bef4d47087ee88148cbd516659c37695047eedc"} Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.250556 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f58796dd5-gpp8b" event={"ID":"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b","Type":"ContainerStarted","Data":"b5c68727bbdaeb69d1e089f2d548c7da065f803754d0b501907ee5667477eebf"} Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.297603 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" event={"ID":"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012","Type":"ContainerStarted","Data":"28affe60e7c3ddd5d4f2433c6611c538ab9d1dafe943b8ab250aa788fdbd2658"} Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.312701 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerStarted","Data":"ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397"} Dec 09 12:28:38 crc kubenswrapper[4703]: I1209 12:28:38.362042 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" event={"ID":"8a74b2dd-de01-4b4f-ae23-00856d55f81a","Type":"ContainerStarted","Data":"b161d7e289a1dcd5fc86f2b0f2b25d122de1de4caae1bd81e0c60b506f31b21d"} Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.373630 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerStarted","Data":"6dead61dfb312ac79981c8eabd409d599baae5e2925bfec2efcfb51b184b5bb1"} Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.375544 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f58796dd5-gpp8b" event={"ID":"ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b","Type":"ContainerStarted","Data":"04ebb514718adab64997a1a76b19864a6c5c7f170264386643abfbe10955538d"} Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.375658 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.377310 4703 generic.go:334] "Generic (PLEG): container finished" podID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerID="7221928aee52153437649465743c33f4b1548af84eeac21bcea0e7162ddcbbee" exitCode=0 Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.377349 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" event={"ID":"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012","Type":"ContainerDied","Data":"7221928aee52153437649465743c33f4b1548af84eeac21bcea0e7162ddcbbee"} Dec 09 12:28:39 crc kubenswrapper[4703]: I1209 12:28:39.399983 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f58796dd5-gpp8b" podStartSLOduration=4.399967699 podStartE2EDuration="4.399967699s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:39.39399455 +0000 UTC m=+1418.642758069" watchObservedRunningTime="2025-12-09 12:28:39.399967699 +0000 UTC m=+1418.648731218" Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.396789 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865d57f67b-x7hvw" 
event={"ID":"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3","Type":"ContainerStarted","Data":"24ca7cd3e83bdaec4e5c64aea59cc59b2439d282b84b07de85b5c445fc2f9dc8"} Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.401142 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" event={"ID":"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012","Type":"ContainerStarted","Data":"f01f5c84e5ca6a649f1cbdccb4c4e2f793d17d73c0b57633fa3f02de7c6ea09d"} Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.402460 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.410257 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerStarted","Data":"e665c0eec651eb3aea395c83654ee943673bd3b89396ca192e184ffc6709f905"} Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.413291 4703 generic.go:334] "Generic (PLEG): container finished" podID="24ebaba5-65a6-4be5-8112-10c77a6d986c" containerID="6f5fa72102cf4873c2f34f345e09eb221c662001500660781d4003559b5546ae" exitCode=0 Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.413336 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5tddl" event={"ID":"24ebaba5-65a6-4be5-8112-10c77a6d986c","Type":"ContainerDied","Data":"6f5fa72102cf4873c2f34f345e09eb221c662001500660781d4003559b5546ae"} Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.417438 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerStarted","Data":"ad9ce1825554221993e768893541a522a107440f94554e7bfba233ff5d0b16be"} Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.429743 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" podStartSLOduration=5.429723018 podStartE2EDuration="5.429723018s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:40.426263132 +0000 UTC m=+1419.675026661" watchObservedRunningTime="2025-12-09 12:28:40.429723018 +0000 UTC m=+1419.678486537" Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.486977 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.486960168 podStartE2EDuration="17.486960168s" podCreationTimestamp="2025-12-09 12:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:40.479065571 +0000 UTC m=+1419.727829090" watchObservedRunningTime="2025-12-09 12:28:40.486960168 +0000 UTC m=+1419.735723687" Dec 09 12:28:40 crc kubenswrapper[4703]: I1209 12:28:40.510300 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.51027836 podStartE2EDuration="16.51027836s" podCreationTimestamp="2025-12-09 12:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:40.505414238 +0000 UTC m=+1419.754177757" watchObservedRunningTime="2025-12-09 12:28:40.51027836 +0000 UTC m=+1419.759041879" Dec 
09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.014855 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fc6c9d8f6-snwb8"] Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.017630 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.021839 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.022563 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.041766 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fc6c9d8f6-snwb8"] Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.066834 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data-custom\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.066881 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24gm\" (UniqueName: \"kubernetes.io/projected/bd9698c2-9ecf-4c99-9060-98201396be37-kube-api-access-m24gm\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.066925 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-internal-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.067055 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.067105 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-combined-ca-bundle\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.067277 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-public-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.067329 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bd9698c2-9ecf-4c99-9060-98201396be37-logs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177061 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177134 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-combined-ca-bundle\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177285 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-public-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177383 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9698c2-9ecf-4c99-9060-98201396be37-logs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177513 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data-custom\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177556 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24gm\" (UniqueName: \"kubernetes.io/projected/bd9698c2-9ecf-4c99-9060-98201396be37-kube-api-access-m24gm\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.177620 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-internal-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.178089 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9698c2-9ecf-4c99-9060-98201396be37-logs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.184567 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data-custom\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.185013 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-combined-ca-bundle\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.186637 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-config-data\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.194261 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-internal-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.201144 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24gm\" (UniqueName: \"kubernetes.io/projected/bd9698c2-9ecf-4c99-9060-98201396be37-kube-api-access-m24gm\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.201706 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd9698c2-9ecf-4c99-9060-98201396be37-public-tls-certs\") pod \"barbican-api-6fc6c9d8f6-snwb8\" (UID: \"bd9698c2-9ecf-4c99-9060-98201396be37\") " pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:41 crc kubenswrapper[4703]: I1209 12:28:41.363517 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:43 crc kubenswrapper[4703]: I1209 12:28:43.982884 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:28:43 crc kubenswrapper[4703]: I1209 12:28:43.983544 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.490045 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-79c5b588df-ksgrp" Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.587621 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.588349 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-api" containerID="cri-o://4fff200c8f68ea22d3b22f55bea6562225b2518482ebdeddc38c45ae5f0acd87" gracePeriod=30 Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.588556 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd" containerID="cri-o://b9d54ab745e1ede6572649622e30d1f60eee4f9770db1af98b5bd7f8460b4099" gracePeriod=30 Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.600802 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74fff9b6-zdlbc" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.163:9696/\": EOF" Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.843122 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:44 crc kubenswrapper[4703]: I1209 12:28:44.843863 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.220813 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.220902 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.222934 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.222985 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.223087 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.233510 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.490551 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5tddl" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.550049 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5tddl" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.550385 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5tddl" event={"ID":"24ebaba5-65a6-4be5-8112-10c77a6d986c","Type":"ContainerDied","Data":"64d9f24ae92cac0e4d79a040aca44254783ee6a024bb928510929bfc0804040a"} Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.552274 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64d9f24ae92cac0e4d79a040aca44254783ee6a024bb928510929bfc0804040a" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.552318 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.552558 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.601826 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.601942 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.602071 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.602133 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.602166 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk6px\" (UniqueName: \"kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.602392 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id\") pod \"24ebaba5-65a6-4be5-8112-10c77a6d986c\" (UID: \"24ebaba5-65a6-4be5-8112-10c77a6d986c\") " Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.602958 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.614828 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts" (OuterVolumeSpecName: "scripts") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.619794 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px" (OuterVolumeSpecName: "kube-api-access-fk6px") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "kube-api-access-fk6px". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.627566 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.675290 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data" (OuterVolumeSpecName: "config-data") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.691570 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24ebaba5-65a6-4be5-8112-10c77a6d986c" (UID: "24ebaba5-65a6-4be5-8112-10c77a6d986c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705451 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705494 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705508 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk6px\" (UniqueName: \"kubernetes.io/projected/24ebaba5-65a6-4be5-8112-10c77a6d986c-kube-api-access-fk6px\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705519 4703 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24ebaba5-65a6-4be5-8112-10c77a6d986c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705531 4703 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.705542 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24ebaba5-65a6-4be5-8112-10c77a6d986c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:45 crc kubenswrapper[4703]: I1209 12:28:45.887159 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fc6c9d8f6-snwb8"] Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.135347 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.209276 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.209577 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns" containerID="cri-o://fd6d85c18f5436e936f17bcd1def02282c8292b6e7829b2b04585449ed5aee0b" gracePeriod=10 Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.594498 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerStarted","Data":"9adfb512ae2ab274efee1d0e64ecacb8729c1d7ec2e24fd57c3de2c74b98259d"} Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.595807 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.596400 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:46 crc kubenswrapper[4703]: I1209 12:28:46.606558 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" event={"ID":"bd9698c2-9ecf-4c99-9060-98201396be37","Type":"ContainerStarted","Data":"02a2ac0153d0b7386d23a5fc3c454bf4fdd7051e224fe2cd879436e40a8c036b"} Dec 09 12:28:47 crc 
kubenswrapper[4703]: I1209 12:28:47.024376 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-94fd6bfbb-8c7xs" podStartSLOduration=11.024354139 podStartE2EDuration="11.024354139s" podCreationTimestamp="2025-12-09 12:28:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:47.01234399 +0000 UTC m=+1426.261107529" watchObservedRunningTime="2025-12-09 12:28:47.024354139 +0000 UTC m=+1426.273117658" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.383693 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:28:47 crc kubenswrapper[4703]: E1209 12:28:47.384334 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" containerName="cinder-db-sync" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.384350 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" containerName="cinder-db-sync" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.384590 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" containerName="cinder-db-sync" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.385888 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.403586 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.404273 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.404434 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5ckqr" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.404490 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.416263 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.451260 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.462358 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.492422 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545034 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545114 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87wn\" (UniqueName: \"kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545449 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545618 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545699 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.545742 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651401 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651474 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651515 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651562 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87wn\" (UniqueName: \"kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651665 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651732 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651760 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651806 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651832 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651871 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651906 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.651929 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw5mq\" (UniqueName: 
\"kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.652357 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.664600 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.670961 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.673590 4703 generic.go:334] "Generic (PLEG): container finished" podID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerID="fd6d85c18f5436e936f17bcd1def02282c8292b6e7829b2b04585449ed5aee0b" exitCode=0 Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.673712 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.673738 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.675096 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" event={"ID":"c23c3529-4f24-4180-84ee-f50c0824b3db","Type":"ContainerDied","Data":"fd6d85c18f5436e936f17bcd1def02282c8292b6e7829b2b04585449ed5aee0b"} Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.677966 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.683641 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.693685 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87wn\" (UniqueName: \"kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn\") pod \"cinder-scheduler-0\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.753871 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: 
\"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.753941 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.753989 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.754010 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw5mq\" (UniqueName: \"kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.754044 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.754067 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.754853 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.755449 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.756038 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.756661 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc 
kubenswrapper[4703]: I1209 12:28:47.757321 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.781537 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.786000 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw5mq\" (UniqueName: \"kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq\") pod \"dnsmasq-dns-5c9776ccc5-tzbnl\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.814050 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.832242 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.834752 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.837284 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.869721 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959496 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959543 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959573 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959634 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959673 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959702 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:47 crc kubenswrapper[4703]: I1209 12:28:47.959789 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mf2\" (UniqueName: \"kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.061969 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062033 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062067 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062132 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062202 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062250 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062350 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mf2\" (UniqueName: \"kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.062820 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.064579 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.068595 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.071123 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.071712 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.082373 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.083346 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mf2\" (UniqueName: \"kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2\") pod \"cinder-api-0\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.204156 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.229565 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.688714 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-865d57f67b-x7hvw" event={"ID":"2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3","Type":"ContainerStarted","Data":"48e4c49db40824cfd4d5ffdfb851aa552c9e2d8c5e562dbee7084c494f5f4ad2"} Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.688791 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.688957 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.692322 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-865d57f67b-x7hvw" podUID="2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.171:8778/\": dial tcp 10.217.0.171:8778: connect: connection refused" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.696662 4703 generic.go:334] "Generic (PLEG): container finished" podID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerID="4fff200c8f68ea22d3b22f55bea6562225b2518482ebdeddc38c45ae5f0acd87" exitCode=0 Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.696744 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.696863 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerDied","Data":"4fff200c8f68ea22d3b22f55bea6562225b2518482ebdeddc38c45ae5f0acd87"} Dec 09 12:28:48 crc kubenswrapper[4703]: I1209 12:28:48.727784 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-865d57f67b-x7hvw" podStartSLOduration=13.727764635 podStartE2EDuration="13.727764635s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:48.715026997 +0000 UTC m=+1427.963790536" watchObservedRunningTime="2025-12-09 12:28:48.727764635 +0000 UTC m=+1427.976528154" Dec 09 12:28:49 crc kubenswrapper[4703]: I1209 12:28:49.713585 4703 generic.go:334] "Generic (PLEG): container finished" podID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerID="b9d54ab745e1ede6572649622e30d1f60eee4f9770db1af98b5bd7f8460b4099" exitCode=0 Dec 09 12:28:49 crc kubenswrapper[4703]: I1209 12:28:49.713660 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerDied","Data":"b9d54ab745e1ede6572649622e30d1f60eee4f9770db1af98b5bd7f8460b4099"} Dec 09 12:28:49 crc kubenswrapper[4703]: I1209 12:28:49.938790 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:49 crc kubenswrapper[4703]: I1209 12:28:49.939146 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.071621 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.071706 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.212495 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.231341 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.254116 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.702752 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"] Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.716397 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.727710 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"] Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.851829 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.852024 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbxc\" (UniqueName: \"kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.852157 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.955472 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbxc\" (UniqueName: \"kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.955895 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.955986 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.956659 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:50 crc kubenswrapper[4703]: I1209 12:28:50.957032 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:51 crc kubenswrapper[4703]: I1209 12:28:51.004248 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbxc\" (UniqueName: \"kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc\") pod \"redhat-operators-m57j8\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") " pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:51 crc kubenswrapper[4703]: I1209 12:28:51.175219 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:28:52 crc kubenswrapper[4703]: I1209 12:28:52.285397 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:28:52 crc kubenswrapper[4703]: I1209 12:28:52.775785 4703 generic.go:334] "Generic (PLEG): container finished" podID="8dc2a644-ad99-4857-9913-562b6ed7371f" containerID="943e64a17b3892f1f99a72714c0d139e740d57019c12afba0c75b8320b4461b5" exitCode=0 Dec 09 12:28:52 crc kubenswrapper[4703]: I1209 12:28:52.775841 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jhwbv" event={"ID":"8dc2a644-ad99-4857-9913-562b6ed7371f","Type":"ContainerDied","Data":"943e64a17b3892f1f99a72714c0d139e740d57019c12afba0c75b8320b4461b5"} Dec 09 12:28:53 crc kubenswrapper[4703]: I1209 12:28:53.011724 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:28:53 crc kubenswrapper[4703]: I1209 12:28:53.232642 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.178987 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.273646 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts\") pod \"8dc2a644-ad99-4857-9913-562b6ed7371f\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.273737 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle\") pod \"8dc2a644-ad99-4857-9913-562b6ed7371f\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.273798 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs\") pod \"8dc2a644-ad99-4857-9913-562b6ed7371f\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.273952 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data\") pod \"8dc2a644-ad99-4857-9913-562b6ed7371f\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.274013 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xljc\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc\") pod \"8dc2a644-ad99-4857-9913-562b6ed7371f\" (UID: \"8dc2a644-ad99-4857-9913-562b6ed7371f\") " Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.351758 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs" (OuterVolumeSpecName: "certs") pod "8dc2a644-ad99-4857-9913-562b6ed7371f" (UID: "8dc2a644-ad99-4857-9913-562b6ed7371f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.351993 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data" (OuterVolumeSpecName: "config-data") pod "8dc2a644-ad99-4857-9913-562b6ed7371f" (UID: "8dc2a644-ad99-4857-9913-562b6ed7371f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.352487 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts" (OuterVolumeSpecName: "scripts") pod "8dc2a644-ad99-4857-9913-562b6ed7371f" (UID: "8dc2a644-ad99-4857-9913-562b6ed7371f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.355418 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc2a644-ad99-4857-9913-562b6ed7371f" (UID: "8dc2a644-ad99-4857-9913-562b6ed7371f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.360593 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc" (OuterVolumeSpecName: "kube-api-access-8xljc") pod "8dc2a644-ad99-4857-9913-562b6ed7371f" (UID: "8dc2a644-ad99-4857-9913-562b6ed7371f"). InnerVolumeSpecName "kube-api-access-8xljc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.389158 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.389339 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xljc\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-kube-api-access-8xljc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.389393 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.389407 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc2a644-ad99-4857-9913-562b6ed7371f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.389420 4703 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8dc2a644-ad99-4857-9913-562b6ed7371f-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.816411 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-jhwbv" event={"ID":"8dc2a644-ad99-4857-9913-562b6ed7371f","Type":"ContainerDied","Data":"e87f3c6a38f8c6c012e119ea1317570617efdd832050f77c61b166d71cf247a1"} Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.816495 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87f3c6a38f8c6c012e119ea1317570617efdd832050f77c61b166d71cf247a1" Dec 09 12:28:55 crc kubenswrapper[4703]: I1209 12:28:55.816599 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-jhwbv" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.298113 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-zflpx"] Dec 09 12:28:56 crc kubenswrapper[4703]: E1209 12:28:56.299105 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" containerName="cloudkitty-db-sync" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.299127 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" containerName="cloudkitty-db-sync" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.299448 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" containerName="cloudkitty-db-sync" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.300525 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.303409 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.304011 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.305156 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.305437 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-smxct" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.310860 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.325666 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.326149 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-zflpx"] Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.413994 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414050 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvzxv\" (UniqueName: \"kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414130 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414153 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414316 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414350 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config\") pod \"c23c3529-4f24-4180-84ee-f50c0824b3db\" (UID: \"c23c3529-4f24-4180-84ee-f50c0824b3db\") " Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414694 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414738 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhh5\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414891 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414936 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.414979 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.455979 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv" (OuterVolumeSpecName: "kube-api-access-fvzxv") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "kube-api-access-fvzxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.501508 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.503264 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.511944 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config" (OuterVolumeSpecName: "config") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.515523 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517427 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517475 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517530 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517693 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517729 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhh5\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517833 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517850 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517865 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517876 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.517886 4703 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvzxv\" (UniqueName: \"kubernetes.io/projected/c23c3529-4f24-4180-84ee-f50c0824b3db-kube-api-access-fvzxv\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.523392 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.523804 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.524243 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.528729 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c23c3529-4f24-4180-84ee-f50c0824b3db" (UID: "c23c3529-4f24-4180-84ee-f50c0824b3db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.531809 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.539716 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkhh5\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5\") pod \"cloudkitty-storageinit-zflpx\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.619758 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c23c3529-4f24-4180-84ee-f50c0824b3db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.635514 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.840302 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" event={"ID":"c23c3529-4f24-4180-84ee-f50c0824b3db","Type":"ContainerDied","Data":"1330ae07edafbde2470130e7d921db0fd80343aa24855b1efb0e95bc12024aab"} Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.840566 4703 scope.go:117] "RemoveContainer" containerID="fd6d85c18f5436e936f17bcd1def02282c8292b6e7829b2b04585449ed5aee0b" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.840767 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t56kd" Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.898992 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:56 crc kubenswrapper[4703]: I1209 12:28:56.909402 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t56kd"] Dec 09 12:28:57 crc kubenswrapper[4703]: E1209 12:28:57.005903 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 09 12:28:57 crc kubenswrapper[4703]: E1209 12:28:57.006258 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkpxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8439e49e-7ee2-4be7-b4a2-f437a2124bd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:28:57 crc kubenswrapper[4703]: E1209 12:28:57.007484 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9"
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.054798 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fff9b6-zdlbc"
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.089039 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" path="/var/lib/kubelet/pods/c23c3529-4f24-4180-84ee-f50c0824b3db/volumes"
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.114843 4703 scope.go:117] "RemoveContainer" containerID="70a1ac91fb3ce9fdd5994aef5f919b90b6b550dcf3e9a59ae3a2a1330b2052e9"
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.139540 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs\") pod \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") "
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.139677 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle\") pod \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") "
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.139735 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config\") pod \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") "
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.139795 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp7ct\" (UniqueName: \"kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct\") pod \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") "
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.139907 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config\") pod \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\" (UID: \"5292ab55-fe87-4bb9-8975-5a06bcc517e0\") "
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.147460 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5292ab55-fe87-4bb9-8975-5a06bcc517e0" (UID: "5292ab55-fe87-4bb9-8975-5a06bcc517e0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.151326 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct" (OuterVolumeSpecName: "kube-api-access-pp7ct") pod "5292ab55-fe87-4bb9-8975-5a06bcc517e0" (UID: "5292ab55-fe87-4bb9-8975-5a06bcc517e0"). InnerVolumeSpecName "kube-api-access-pp7ct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.244435 4703 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.244708 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp7ct\" (UniqueName: \"kubernetes.io/projected/5292ab55-fe87-4bb9-8975-5a06bcc517e0-kube-api-access-pp7ct\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.327599 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5292ab55-fe87-4bb9-8975-5a06bcc517e0" (UID: "5292ab55-fe87-4bb9-8975-5a06bcc517e0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.348116 4703 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.352925 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config" (OuterVolumeSpecName: "config") pod "5292ab55-fe87-4bb9-8975-5a06bcc517e0" (UID: "5292ab55-fe87-4bb9-8975-5a06bcc517e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.376055 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5292ab55-fe87-4bb9-8975-5a06bcc517e0" (UID: "5292ab55-fe87-4bb9-8975-5a06bcc517e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.451109 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.451150 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5292ab55-fe87-4bb9-8975-5a06bcc517e0-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.611464 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 09 12:28:57 crc kubenswrapper[4703]: W1209 12:28:57.612905 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7453bd3b_1c8e_4401_b6b3_644e7aff4ca2.slice/crio-1564de6bb2e7f86d2af6f6be076b2f4974b2b52ef434efb9a4038add710709a3 WatchSource:0}: Error finding container 1564de6bb2e7f86d2af6f6be076b2f4974b2b52ef434efb9a4038add710709a3: Status 404 returned error can't find the container with id 1564de6bb2e7f86d2af6f6be076b2f4974b2b52ef434efb9a4038add710709a3
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.720045 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"]
Dec 09 12:28:57 crc kubenswrapper[4703]: W1209 12:28:57.729334 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03ce63d0_6e1f_4d88_a26c_05b867554db5.slice/crio-aa36a8f400a19486c0eb53c2b724b8b4249634a01fe08af3726690ed755dd464 WatchSource:0}: Error finding container aa36a8f400a19486c0eb53c2b724b8b4249634a01fe08af3726690ed755dd464: Status 404 returned error can't find the container with id aa36a8f400a19486c0eb53c2b724b8b4249634a01fe08af3726690ed755dd464
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.877362 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerStarted","Data":"1564de6bb2e7f86d2af6f6be076b2f4974b2b52ef434efb9a4038add710709a3"}
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.886503 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74fff9b6-zdlbc" event={"ID":"5292ab55-fe87-4bb9-8975-5a06bcc517e0","Type":"ContainerDied","Data":"eb66cd1a9801a447461638231e1db40c83fd42bf3ddd39fcf1f36ed85af0afa2"}
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.886565 4703 scope.go:117] "RemoveContainer" containerID="b9d54ab745e1ede6572649622e30d1f60eee4f9770db1af98b5bd7f8460b4099"
Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.886745 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74fff9b6-zdlbc"
Need to start a new one" pod="openstack/neutron-74fff9b6-zdlbc" Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.893682 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" event={"ID":"8a74b2dd-de01-4b4f-ae23-00856d55f81a","Type":"ContainerStarted","Data":"21bb4afb68e24c2cc70e4c9b69021f939b4a4ddbeafc043350f5bc93308cd839"} Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.896386 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" event={"ID":"dadce417-24c9-4497-91bc-4bbb67fae899","Type":"ContainerStarted","Data":"247cf8402258bc7fba8ccb36f353cc79f4fe97bbea82cf9be878074e48e0aac6"} Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.898176 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerStarted","Data":"aa36a8f400a19486c0eb53c2b724b8b4249634a01fe08af3726690ed755dd464"} Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.906540 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="ceilometer-notification-agent" containerID="cri-o://196a84f9faf1f8521ddb23fbe69489d831a45350fd9be2fd08437f7bb40847e2" gracePeriod=30 Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.906669 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" event={"ID":"bd9698c2-9ecf-4c99-9060-98201396be37","Type":"ContainerStarted","Data":"47e3627e9b167ce323a845eef16b213c5f6f7cd7fbd7e9f10a35d5fb61996813"} Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.907125 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="sg-core" containerID="cri-o://65547492c074a0670140ecd1ec250f99d736b5ac6aa6dae034ea634c38c1e3d5" gracePeriod=30 Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.975861 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.986146 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74fff9b6-zdlbc"] Dec 09 12:28:57 crc kubenswrapper[4703]: I1209 12:28:57.997794 4703 scope.go:117] "RemoveContainer" containerID="4fff200c8f68ea22d3b22f55bea6562225b2518482ebdeddc38c45ae5f0acd87" Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.089702 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-zflpx"] Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.102911 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:28:58 crc kubenswrapper[4703]: W1209 12:28:58.111040 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c7f991a_5d56_41ee_a0c3_005743b900b9.slice/crio-8c3bd03a39e6cc8cc42fbbcefabf5f9074ba915e4d56cee8e935e57eb707de41 WatchSource:0}: Error finding container 8c3bd03a39e6cc8cc42fbbcefabf5f9074ba915e4d56cee8e935e57eb707de41: Status 404 returned error can't find the container with id 8c3bd03a39e6cc8cc42fbbcefabf5f9074ba915e4d56cee8e935e57eb707de41 Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.115181 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.922031 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerStarted","Data":"8c3bd03a39e6cc8cc42fbbcefabf5f9074ba915e4d56cee8e935e57eb707de41"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.931330 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerStarted","Data":"341864cf418070075ecf2f45bd5aab48a85e5c5716fa23240910c198bfb425b3"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.933605 4703 generic.go:334] "Generic (PLEG): container finished" podID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerID="65547492c074a0670140ecd1ec250f99d736b5ac6aa6dae034ea634c38c1e3d5" exitCode=2 Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.933632 4703 generic.go:334] "Generic (PLEG): container finished" podID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerID="196a84f9faf1f8521ddb23fbe69489d831a45350fd9be2fd08437f7bb40847e2" exitCode=0 Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.933677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerDied","Data":"65547492c074a0670140ecd1ec250f99d736b5ac6aa6dae034ea634c38c1e3d5"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.933706 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerDied","Data":"196a84f9faf1f8521ddb23fbe69489d831a45350fd9be2fd08437f7bb40847e2"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.936032 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" event={"ID":"8a74b2dd-de01-4b4f-ae23-00856d55f81a","Type":"ContainerStarted","Data":"6a11c794403c7e16dd2dbb92a38f24b8746be881b7b1836b94e1c8ed4a6bc73a"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.942920 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zflpx" event={"ID":"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1","Type":"ContainerStarted","Data":"d740a9dc6c4bf9dbe86f36ddda107bf71bc493667564b38b84e6fcf64e2c4497"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.942967 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zflpx" event={"ID":"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1","Type":"ContainerStarted","Data":"c6963214d77104d821b0f66f0a639a84f6a31affde34b1c557eeccb1d70813de"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.947296 4703 generic.go:334] "Generic (PLEG): container finished" podID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerID="0228cb72e4c9d928c1122bbcfbbf8512ebc7f66513a5cfdcdfc65cbfc2a7c9d8" exitCode=0 Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.947373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerDied","Data":"0228cb72e4c9d928c1122bbcfbbf8512ebc7f66513a5cfdcdfc65cbfc2a7c9d8"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.962580 4703 generic.go:334] "Generic (PLEG): container finished" podID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerID="a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6" exitCode=0 Dec 09 
12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.962694 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" event={"ID":"e4b5b400-6d6b-41b6-a431-c22f26c76096","Type":"ContainerDied","Data":"a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.962728 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" event={"ID":"e4b5b400-6d6b-41b6-a431-c22f26c76096","Type":"ContainerStarted","Data":"9123afa486ea1d7f9144678429e288ebababf63fba131de7297c579a29f91431"} Dec 09 12:28:58 crc kubenswrapper[4703]: I1209 12:28:58.975558 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-569b9f958-2lvsd" podStartSLOduration=4.376879262 podStartE2EDuration="23.97552966s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="2025-12-09 12:28:37.532471705 +0000 UTC m=+1416.781235224" lastFinishedPulling="2025-12-09 12:28:57.131122103 +0000 UTC m=+1436.379885622" observedRunningTime="2025-12-09 12:28:58.962885774 +0000 UTC m=+1438.211649303" watchObservedRunningTime="2025-12-09 12:28:58.97552966 +0000 UTC m=+1438.224293179" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.004135 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" event={"ID":"bd9698c2-9ecf-4c99-9060-98201396be37","Type":"ContainerStarted","Data":"324c9d6046cc4dae688fa940aa75b0e1406d3a4d8d82037d617323647761309a"} Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.006402 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.006450 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.007771 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-zflpx" podStartSLOduration=3.007752525 podStartE2EDuration="3.007752525s" podCreationTimestamp="2025-12-09 12:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:58.998212226 +0000 UTC m=+1438.246975745" watchObservedRunningTime="2025-12-09 12:28:59.007752525 +0000 UTC m=+1438.256516044" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.023456 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" event={"ID":"dadce417-24c9-4497-91bc-4bbb67fae899","Type":"ContainerStarted","Data":"ba11ba0fb05ebc9a9a0cdff320f0fed0c60a8a2433db1bfbf4b197969d6fbeab"} Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.113692 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" path="/var/lib/kubelet/pods/5292ab55-fe87-4bb9-8975-5a06bcc517e0/volumes" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.130348 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7f4f589d9f-q5cvn" podStartSLOduration=4.180735584 podStartE2EDuration="24.130320487s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="2025-12-09 12:28:37.019692427 +0000 UTC m=+1416.268455946" lastFinishedPulling="2025-12-09 12:28:56.96927733 +0000 UTC m=+1436.218040849" 
observedRunningTime="2025-12-09 12:28:59.095690211 +0000 UTC m=+1438.344453730" watchObservedRunningTime="2025-12-09 12:28:59.130320487 +0000 UTC m=+1438.379084006" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.158523 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.215992 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" podStartSLOduration=19.215968506 podStartE2EDuration="19.215968506s" podCreationTimestamp="2025-12-09 12:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:59.153002843 +0000 UTC m=+1438.401766362" watchObservedRunningTime="2025-12-09 12:28:59.215968506 +0000 UTC m=+1438.464732025" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.241645 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242058 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242589 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkpxz\" (UniqueName: \"kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242668 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242718 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242740 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.242842 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml\") pod \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\" (UID: \"8439e49e-7ee2-4be7-b4a2-f437a2124bd9\") " Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.243685 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.243973 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.251486 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts" (OuterVolumeSpecName: "scripts") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.252838 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz" (OuterVolumeSpecName: "kube-api-access-zkpxz") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "kube-api-access-zkpxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.294541 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.314855 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data" (OuterVolumeSpecName: "config-data") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.322391 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8439e49e-7ee2-4be7-b4a2-f437a2124bd9" (UID: "8439e49e-7ee2-4be7-b4a2-f437a2124bd9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351591 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkpxz\" (UniqueName: \"kubernetes.io/projected/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-kube-api-access-zkpxz\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351638 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351651 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351667 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351679 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351691 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:59 crc kubenswrapper[4703]: I1209 12:28:59.351701 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8439e49e-7ee2-4be7-b4a2-f437a2124bd9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.069722 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerStarted","Data":"d97e9fdfe98dc422ad1cce9839437d8b0f25b9c846cd6019011703236ff7f7b2"} Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.078167 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" event={"ID":"e4b5b400-6d6b-41b6-a431-c22f26c76096","Type":"ContainerStarted","Data":"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c"} Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.078539 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.088905 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerStarted","Data":"eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22"} Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.089927 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api-log" containerID="cri-o://341864cf418070075ecf2f45bd5aab48a85e5c5716fa23240910c198bfb425b3" gracePeriod=30 Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.090248 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.111329 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" podStartSLOduration=13.111306368 podStartE2EDuration="13.111306368s" podCreationTimestamp="2025-12-09 12:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:00.109738959 +0000 UTC m=+1439.358502488" watchObservedRunningTime="2025-12-09 12:29:00.111306368 +0000 UTC m=+1439.360069887"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.118747 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.118896 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8439e49e-7ee2-4be7-b4a2-f437a2124bd9","Type":"ContainerDied","Data":"9181fd1c18fec456c579317f8049bec85fe27fd3bbc835b729fa2c0c5a2558fe"}
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.118938 4703 scope.go:117] "RemoveContainer" containerID="65547492c074a0670140ecd1ec250f99d736b5ac6aa6dae034ea634c38c1e3d5"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.165444 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.1654198 podStartE2EDuration="13.1654198s" podCreationTimestamp="2025-12-09 12:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:00.131904043 +0000 UTC m=+1439.380667562" watchObservedRunningTime="2025-12-09 12:29:00.1654198 +0000 UTC m=+1439.414183319"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.209962 4703 scope.go:117] "RemoveContainer" containerID="196a84f9faf1f8521ddb23fbe69489d831a45350fd9be2fd08437f7bb40847e2"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.336689 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.366891 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.377553 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378106 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-api"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378129 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-api"
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378153 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="sg-core"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378163 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="sg-core"
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378200 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378210 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd"
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378242 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378253 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns"
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378270 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="init"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378277 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="init"
Dec 09 12:29:00 crc kubenswrapper[4703]: E1209 12:29:00.378301 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="ceilometer-notification-agent"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378311 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="ceilometer-notification-agent"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378563 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="sg-core"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378586 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23c3529-4f24-4180-84ee-f50c0824b3db" containerName="dnsmasq-dns"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378600 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" containerName="ceilometer-notification-agent"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378622 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-httpd"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.378641 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="5292ab55-fe87-4bb9-8975-5a06bcc517e0" containerName="neutron-api"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.382125 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.387415 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.387622 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.389511 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.498231 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.498293 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.498646 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.498771 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.498916 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6bp\" (UniqueName: \"kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.499043 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.499122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.600981 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601080 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601127 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6bp\" (UniqueName: \"kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601157 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601226 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601264 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601304 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.601823 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.602079 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.608998 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.609746 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.610263 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.610449 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.620262 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6bp\" (UniqueName: \"kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp\") pod \"ceilometer-0\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " pod="openstack/ceilometer-0"
Dec 09 12:29:00 crc kubenswrapper[4703]: I1209 12:29:00.712620 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:29:01 crc kubenswrapper[4703]: I1209 12:29:01.088306 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8439e49e-7ee2-4be7-b4a2-f437a2124bd9" path="/var/lib/kubelet/pods/8439e49e-7ee2-4be7-b4a2-f437a2124bd9/volumes"
Dec 09 12:29:01 crc kubenswrapper[4703]: I1209 12:29:01.133635 4703 generic.go:334] "Generic (PLEG): container finished" podID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerID="341864cf418070075ecf2f45bd5aab48a85e5c5716fa23240910c198bfb425b3" exitCode=143
Dec 09 12:29:01 crc kubenswrapper[4703]: I1209 12:29:01.133700 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerDied","Data":"341864cf418070075ecf2f45bd5aab48a85e5c5716fa23240910c198bfb425b3"}
Dec 09 12:29:01 crc kubenswrapper[4703]: I1209 12:29:01.140307 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerStarted","Data":"a14c9a018ccdb3b32b872d1f789f41c1b40d95edf3fdae8166f9924ad4afafd6"}
Dec 09 12:29:01 crc kubenswrapper[4703]: I1209 12:29:01.167397 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:02 crc kubenswrapper[4703]: I1209 12:29:02.168467 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerStarted","Data":"314ac44285b5a3a856e849a37b17ad1b466db5f5fcf6d07623dad0f486cd1e0d"}
Dec 09 12:29:02 crc kubenswrapper[4703]: I1209 12:29:02.172733 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerStarted","Data":"d07006d6a61b3b7a5d4ce06765d558f99bce004e3a477926ef4e379538a22286"}
Dec 09 12:29:02 crc kubenswrapper[4703]: I1209 12:29:02.205462 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=14.397975424 podStartE2EDuration="15.205429482s" podCreationTimestamp="2025-12-09 12:28:47 +0000 UTC" firstStartedPulling="2025-12-09 12:28:58.11433457 +0000 UTC m=+1437.363098089" lastFinishedPulling="2025-12-09 12:28:58.921788628 +0000 UTC m=+1438.170552147" observedRunningTime="2025-12-09 12:29:02.194348586 +0000 UTC m=+1441.443112105" watchObservedRunningTime="2025-12-09 12:29:02.205429482 +0000 UTC m=+1441.454193001"
Dec 09 12:29:02 crc kubenswrapper[4703]: I1209 12:29:02.783110 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 09 12:29:03 crc kubenswrapper[4703]: I1209 12:29:03.182024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerStarted","Data":"54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5"}
Dec 09 12:29:04 crc kubenswrapper[4703]: I1209 12:29:04.194803 4703 generic.go:334] "Generic (PLEG): container finished" podID="0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" containerID="d740a9dc6c4bf9dbe86f36ddda107bf71bc493667564b38b84e6fcf64e2c4497" exitCode=0
Dec 09 12:29:04 crc kubenswrapper[4703]: I1209 12:29:04.194886 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zflpx" event={"ID":"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1","Type":"ContainerDied","Data":"d740a9dc6c4bf9dbe86f36ddda107bf71bc493667564b38b84e6fcf64e2c4497"}
Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.209458 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerStarted","Data":"c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89"}
Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.211837 4703 generic.go:334] "Generic (PLEG): container finished" podID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerID="a14c9a018ccdb3b32b872d1f789f41c1b40d95edf3fdae8166f9924ad4afafd6" exitCode=0
Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.211904 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerDied","Data":"a14c9a018ccdb3b32b872d1f789f41c1b40d95edf3fdae8166f9924ad4afafd6"}
Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.708967 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-zflpx"
Need to start a new one" pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.769587 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data\") pod \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.769889 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle\") pod \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.769951 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkhh5\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5\") pod \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.769985 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs\") pod \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.770021 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts\") pod \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\" (UID: \"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1\") " Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.778519 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5" (OuterVolumeSpecName: "kube-api-access-mkhh5") pod "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" (UID: "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1"). InnerVolumeSpecName "kube-api-access-mkhh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.783798 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts" (OuterVolumeSpecName: "scripts") pod "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" (UID: "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.787369 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs" (OuterVolumeSpecName: "certs") pod "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" (UID: "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.807959 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" (UID: "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.816405 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data" (OuterVolumeSpecName: "config-data") pod "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" (UID: "0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.872624 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.872670 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkhh5\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-kube-api-access-mkhh5\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.872866 4703 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.872876 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4703]: I1209 12:29:05.872886 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.224363 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerStarted","Data":"3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb"} Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.226625 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-zflpx" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.226501 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zflpx" event={"ID":"0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1","Type":"ContainerDied","Data":"c6963214d77104d821b0f66f0a639a84f6a31affde34b1c557eeccb1d70813de"} Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.227372 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6963214d77104d821b0f66f0a639a84f6a31affde34b1c557eeccb1d70813de" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.443138 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:06 crc kubenswrapper[4703]: E1209 12:29:06.444477 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" containerName="cloudkitty-storageinit" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.444511 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" containerName="cloudkitty-storageinit" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.445104 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" containerName="cloudkitty-storageinit" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.459816 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.464849 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.470413 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.470760 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.481002 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.506896 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-smxct" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.520815 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.569575 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.569890 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="dnsmasq-dns" containerID="cri-o://0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c" gracePeriod=10 Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.583657 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.604710 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts\") pod \"cloudkitty-proc-0\" (UID: 
\"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.604796 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.604848 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.604873 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.604950 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.605076 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l275f\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.653276 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.655358 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.704718 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.706631 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.706686 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.706764 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.706905 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l275f\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.706954 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.707017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.737433 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.737882 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.741347 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.741978 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.754897 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.797912 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l275f\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f\") pod \"cloudkitty-proc-0\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.808600 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.808860 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.808976 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zdt\" (UniqueName: \"kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.809046 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.809143 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.809343 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.823163 4703 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.881760 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.895748 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.895882 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.901428 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917565 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917627 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96zdt\" (UniqueName: \"kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917656 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917679 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917773 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.917853 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.918864 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.919424 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.919734 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.919997 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.921817 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:06 crc kubenswrapper[4703]: I1209 12:29:06.949950 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96zdt\" (UniqueName: \"kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt\") pod \"dnsmasq-dns-67bdc55879-vmqd9\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.019744 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.019801 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.019846 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.019860 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.019909 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs\") pod 
\"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.020003 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs8cr\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.020062 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.057177 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124071 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124140 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124235 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124258 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124300 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124402 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs8cr\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124483 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " 
pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.124679 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.133862 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.134936 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.140245 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.142828 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.162803 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.181741 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs8cr\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr\") pod \"cloudkitty-api-0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.199834 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-865d57f67b-x7hvw" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.393000 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.477268 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.756741 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.815358 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Dec 09 12:29:07 crc kubenswrapper[4703]: I1209 12:29:07.992546 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.127381 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.211758 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.287249 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" event={"ID":"a71f8c26-a813-4d5b-9ce4-c9b6075a3153","Type":"ContainerStarted","Data":"277e9bcfbf658b9ada62d6fa32435fb64ddf89869298538fb505b1d4ad22acc8"} Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.289364 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"0273ae71-ef65-4c05-a133-45a5078aeca9","Type":"ContainerStarted","Data":"e57641c835214038236ebb7b876fc5f5b780b82299b43f7d8ebe2c80e33949ae"} Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.294786 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerStarted","Data":"3847fe3fa7f06e774de21d42af451bfd7ac9c46d862625b246d8f6a0bb24eb68"} Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.295000 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="cinder-scheduler" containerID="cri-o://d97e9fdfe98dc422ad1cce9839437d8b0f25b9c846cd6019011703236ff7f7b2" gracePeriod=30 Dec 09 12:29:08 crc kubenswrapper[4703]: I1209 12:29:08.295569 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="probe" containerID="cri-o://d07006d6a61b3b7a5d4ce06765d558f99bce004e3a477926ef4e379538a22286" gracePeriod=30 Dec 09 12:29:10 crc kubenswrapper[4703]: I1209 12:29:10.233471 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-77fms" podUID="db6f122b-a853-4ecb-8d82-2a8a04c8224e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:10 crc kubenswrapper[4703]: I1209 12:29:10.373451 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" podUID="bd9698c2-9ecf-4c99-9060-98201396be37" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:10 crc kubenswrapper[4703]: I1209 12:29:10.373993 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" podUID="bd9698c2-9ecf-4c99-9060-98201396be37" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:10 crc kubenswrapper[4703]: I1209 12:29:10.853051 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.187798 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243272 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw5mq\" (UniqueName: \"kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243418 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243482 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243603 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243677 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.243844 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config\") pod \"e4b5b400-6d6b-41b6-a431-c22f26c76096\" (UID: \"e4b5b400-6d6b-41b6-a431-c22f26c76096\") " Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.281659 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq" (OuterVolumeSpecName: "kube-api-access-sw5mq") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "kube-api-access-sw5mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.348220 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw5mq\" (UniqueName: \"kubernetes.io/projected/e4b5b400-6d6b-41b6-a431-c22f26c76096-kube-api-access-sw5mq\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.379434 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" podUID="bd9698c2-9ecf-4c99-9060-98201396be37" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.380173 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" podUID="bd9698c2-9ecf-4c99-9060-98201396be37" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.176:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.391391 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerStarted","Data":"2b0f1efd57ac5d4844d3c9a0320692922b3163e998e1b9f2c0e4028a660b10e3"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.427966 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerStarted","Data":"68afdfa339f9a8f0da9ae3cbe4ac53e990e55a3ecce98b631fa8b0a4c4682a7f"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.437561 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.459548 4703 generic.go:334] "Generic (PLEG): container finished" podID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerID="d07006d6a61b3b7a5d4ce06765d558f99bce004e3a477926ef4e379538a22286" exitCode=0 Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.459677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerDied","Data":"d07006d6a61b3b7a5d4ce06765d558f99bce004e3a477926ef4e379538a22286"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.478799 4703 generic.go:334] "Generic (PLEG): container finished" podID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerID="3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07" exitCode=0 Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.478903 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" event={"ID":"a71f8c26-a813-4d5b-9ce4-c9b6075a3153","Type":"ContainerDied","Data":"3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.484817 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m57j8" podStartSLOduration=14.071179452 podStartE2EDuration="21.484787299s" podCreationTimestamp="2025-12-09 12:28:50 +0000 UTC" firstStartedPulling="2025-12-09 12:28:58.949230783 +0000 UTC m=+1438.197994312" lastFinishedPulling="2025-12-09 12:29:06.36283864 +0000 UTC m=+1445.611602159" 
observedRunningTime="2025-12-09 12:29:11.45995063 +0000 UTC m=+1450.708714149" watchObservedRunningTime="2025-12-09 12:29:11.484787299 +0000 UTC m=+1450.733550818" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.496004 4703 generic.go:334] "Generic (PLEG): container finished" podID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerID="0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c" exitCode=0 Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.496077 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" event={"ID":"e4b5b400-6d6b-41b6-a431-c22f26c76096","Type":"ContainerDied","Data":"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.496120 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" event={"ID":"e4b5b400-6d6b-41b6-a431-c22f26c76096","Type":"ContainerDied","Data":"9123afa486ea1d7f9144678429e288ebababf63fba131de7297c579a29f91431"} Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.496151 4703 scope.go:117] "RemoveContainer" containerID="0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.496351 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-tzbnl" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.547989 4703 scope.go:117] "RemoveContainer" containerID="a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.862554 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.871804 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.885537 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.950265 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config" (OuterVolumeSpecName: "config") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.964029 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.974267 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e4b5b400-6d6b-41b6-a431-c22f26c76096" (UID: "e4b5b400-6d6b-41b6-a431-c22f26c76096"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.978712 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.979143 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.979162 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:11 crc kubenswrapper[4703]: I1209 12:29:11.979174 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e4b5b400-6d6b-41b6-a431-c22f26c76096-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.180411 4703 scope.go:117] "RemoveContainer" containerID="0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c" Dec 09 12:29:12 crc kubenswrapper[4703]: E1209 12:29:12.184926 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c\": container with ID starting with 0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c not found: ID does not exist" containerID="0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.185024 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c"} err="failed to get container status \"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c\": rpc error: code = NotFound desc = could not find container \"0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c\": container with ID starting with 0672e39abc998c95a013de1042361e369fade063b9105418f668641df98d302c not found: ID does not exist" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.185064 4703 scope.go:117] "RemoveContainer" containerID="a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6" Dec 09 12:29:12 crc kubenswrapper[4703]: E1209 12:29:12.188419 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6\": container with ID starting with a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6 not found: ID does not exist" containerID="a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.188469 4703 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6"} err="failed to get container status \"a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6\": rpc error: code = NotFound desc = could not find container \"a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6\": container with ID starting with a7343e01079205370635071e0032bf300d0ac93dac4e84a92bd8a6dc860661d6 not found: ID does not exist" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.301461 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.323509 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-tzbnl"] Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.572768 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerStarted","Data":"1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf"} Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.573112 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.590395 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerStarted","Data":"0a76465836eb3c5ff0542df0c969fed3fa0324db99770405252cd7921b9c051c"} Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.590633 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api-log" containerID="cri-o://2b0f1efd57ac5d4844d3c9a0320692922b3163e998e1b9f2c0e4028a660b10e3" gracePeriod=30 Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.590819 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.590876 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api" containerID="cri-o://0a76465836eb3c5ff0542df0c969fed3fa0324db99770405252cd7921b9c051c" gracePeriod=30 Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.606526 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.025723809 podStartE2EDuration="12.606496596s" podCreationTimestamp="2025-12-09 12:29:00 +0000 UTC" firstStartedPulling="2025-12-09 12:29:01.17160846 +0000 UTC m=+1440.420371979" lastFinishedPulling="2025-12-09 12:29:10.752381247 +0000 UTC m=+1450.001144766" observedRunningTime="2025-12-09 12:29:12.598267121 +0000 UTC m=+1451.847030650" watchObservedRunningTime="2025-12-09 12:29:12.606496596 +0000 UTC m=+1451.855260145" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.610580 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" event={"ID":"a71f8c26-a813-4d5b-9ce4-c9b6075a3153","Type":"ContainerStarted","Data":"3d10f8ed77019f4d15e88bd101653da99f26003f8b29e93f8ede48be5156ed20"} Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.611563 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.640519 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=6.640492736 podStartE2EDuration="6.640492736s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:12.629965052 +0000 UTC m=+1451.878728591" watchObservedRunningTime="2025-12-09 12:29:12.640492736 +0000 UTC m=+1451.889256325" Dec 09 12:29:12 crc kubenswrapper[4703]: I1209 12:29:12.656956 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" podStartSLOduration=6.656936426 podStartE2EDuration="6.656936426s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:12.653657154 +0000 UTC m=+1451.902420673" watchObservedRunningTime="2025-12-09 12:29:12.656936426 +0000 UTC m=+1451.905699935" Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.085958 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" path="/var/lib/kubelet/pods/e4b5b400-6d6b-41b6-a431-c22f26c76096/volumes" Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.249460 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.179:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.689982 4703 generic.go:334] "Generic (PLEG): container finished" podID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerID="0a76465836eb3c5ff0542df0c969fed3fa0324db99770405252cd7921b9c051c" exitCode=0 Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.690028 4703 generic.go:334] "Generic (PLEG): container finished" podID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerID="2b0f1efd57ac5d4844d3c9a0320692922b3163e998e1b9f2c0e4028a660b10e3" exitCode=143 Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.690166 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerDied","Data":"0a76465836eb3c5ff0542df0c969fed3fa0324db99770405252cd7921b9c051c"} Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.690226 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerDied","Data":"2b0f1efd57ac5d4844d3c9a0320692922b3163e998e1b9f2c0e4028a660b10e3"} Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.706993 4703 generic.go:334] "Generic (PLEG): container finished" podID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerID="d97e9fdfe98dc422ad1cce9839437d8b0f25b9c846cd6019011703236ff7f7b2" exitCode=0 Dec 09 12:29:13 crc kubenswrapper[4703]: I1209 12:29:13.707287 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerDied","Data":"d97e9fdfe98dc422ad1cce9839437d8b0f25b9c846cd6019011703236ff7f7b2"} Dec 09 12:29:14 crc kubenswrapper[4703]: I1209 12:29:14.359262 4703 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f58796dd5-gpp8b" Dec 09 12:29:14 crc kubenswrapper[4703]: I1209 12:29:14.667354 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fc6c9d8f6-snwb8" Dec 09 12:29:14 crc kubenswrapper[4703]: I1209 12:29:14.746781 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:29:14 crc kubenswrapper[4703]: I1209 12:29:14.747117 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-94fd6bfbb-8c7xs" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api-log" containerID="cri-o://6dead61dfb312ac79981c8eabd409d599baae5e2925bfec2efcfb51b184b5bb1" gracePeriod=30 Dec 09 12:29:14 crc kubenswrapper[4703]: I1209 12:29:14.747724 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-94fd6bfbb-8c7xs" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api" containerID="cri-o://9adfb512ae2ab274efee1d0e64ecacb8729c1d7ec2e24fd57c3de2c74b98259d" gracePeriod=30 Dec 09 12:29:15 crc kubenswrapper[4703]: I1209 12:29:15.744402 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerID="6dead61dfb312ac79981c8eabd409d599baae5e2925bfec2efcfb51b184b5bb1" exitCode=143 Dec 09 12:29:15 crc kubenswrapper[4703]: I1209 12:29:15.744504 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerDied","Data":"6dead61dfb312ac79981c8eabd409d599baae5e2925bfec2efcfb51b184b5bb1"} Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.062382 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.197182 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.197483 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="dnsmasq-dns" containerID="cri-o://f01f5c84e5ca6a649f1cbdccb4c4e2f793d17d73c0b57633fa3f02de7c6ea09d" gracePeriod=10 Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.235246 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:17 crc kubenswrapper[4703]: E1209 12:29:17.236058 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="dnsmasq-dns" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.236088 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="dnsmasq-dns" Dec 09 12:29:17 crc kubenswrapper[4703]: E1209 12:29:17.236133 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="init" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.236144 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="init" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.236470 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b5b400-6d6b-41b6-a431-c22f26c76096" containerName="dnsmasq-dns" Dec 
09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.237652 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.246233 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.246762 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.246787 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.249799 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4kqzz" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.332931 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr69c\" (UniqueName: \"kubernetes.io/projected/82a9b4ac-cb47-454e-802d-0f24b798103b-kube-api-access-jr69c\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.333551 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.333752 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config-secret\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.333877 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.436434 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config-secret\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.437047 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.438094 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr69c\" (UniqueName: \"kubernetes.io/projected/82a9b4ac-cb47-454e-802d-0f24b798103b-kube-api-access-jr69c\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc 
kubenswrapper[4703]: I1209 12:29:17.438318 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.439623 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.447216 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-openstack-config-secret\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.461344 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82a9b4ac-cb47-454e-802d-0f24b798103b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.483784 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr69c\" (UniqueName: \"kubernetes.io/projected/82a9b4ac-cb47-454e-802d-0f24b798103b-kube-api-access-jr69c\") pod \"openstackclient\" (UID: \"82a9b4ac-cb47-454e-802d-0f24b798103b\") " pod="openstack/openstackclient" Dec 09 12:29:17 crc kubenswrapper[4703]: I1209 12:29:17.573227 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.001606 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-94fd6bfbb-8c7xs" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:46584->10.217.0.175:9311: read: connection reset by peer" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.002484 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-94fd6bfbb-8c7xs" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:46586->10.217.0.175:9311: read: connection reset by peer" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.021435 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.025724 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051465 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051547 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051665 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87wn\" (UniqueName: \"kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051712 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051736 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs8cr\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051858 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051911 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.051980 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom\") pod \"2c7f991a-5d56-41ee-a0c3-005743b900b9\" (UID: \"2c7f991a-5d56-41ee-a0c3-005743b900b9\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.052050 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.052084 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: 
\"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.052170 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.052226 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.052280 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data\") pod \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\" (UID: \"62eeb497-1ec6-4eb9-b130-b3f901ad26f0\") " Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.053471 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.057482 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs" (OuterVolumeSpecName: "logs") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.085264 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts" (OuterVolumeSpecName: "scripts") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.085736 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn" (OuterVolumeSpecName: "kube-api-access-p87wn") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "kube-api-access-p87wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.087346 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs" (OuterVolumeSpecName: "certs") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.087422 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr" (OuterVolumeSpecName: "kube-api-access-cs8cr") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "kube-api-access-cs8cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.089711 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts" (OuterVolumeSpecName: "scripts") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.089811 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.092458 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.148334 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.155423 4703 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.155858 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs8cr\" (UniqueName: \"kubernetes.io/projected/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-kube-api-access-cs8cr\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156090 4703 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c7f991a-5d56-41ee-a0c3-005743b900b9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156227 4703 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156364 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156490 4703 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156603 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156704 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.156826 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.157026 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87wn\" (UniqueName: \"kubernetes.io/projected/2c7f991a-5d56-41ee-a0c3-005743b900b9-kube-api-access-p87wn\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.157966 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data" (OuterVolumeSpecName: "config-data") pod "62eeb497-1ec6-4eb9-b130-b3f901ad26f0" (UID: "62eeb497-1ec6-4eb9-b130-b3f901ad26f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.215543 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.264144 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62eeb497-1ec6-4eb9-b130-b3f901ad26f0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.266424 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.277487 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data" (OuterVolumeSpecName: "config-data") pod "2c7f991a-5d56-41ee-a0c3-005743b900b9" (UID: "2c7f991a-5d56-41ee-a0c3-005743b900b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.313851 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.179:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.369478 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c7f991a-5d56-41ee-a0c3-005743b900b9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.523140 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.810083 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.810098 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"62eeb497-1ec6-4eb9-b130-b3f901ad26f0","Type":"ContainerDied","Data":"3847fe3fa7f06e774de21d42af451bfd7ac9c46d862625b246d8f6a0bb24eb68"} Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.812411 4703 scope.go:117] "RemoveContainer" containerID="0a76465836eb3c5ff0542df0c969fed3fa0324db99770405252cd7921b9c051c" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.827159 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82a9b4ac-cb47-454e-802d-0f24b798103b","Type":"ContainerStarted","Data":"707dcfe49c40b195c13d83575323c71bec8e3aa45d73b7d5b677c59c9fc266e2"} Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.845468 4703 generic.go:334] "Generic (PLEG): container finished" podID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerID="f01f5c84e5ca6a649f1cbdccb4c4e2f793d17d73c0b57633fa3f02de7c6ea09d" exitCode=0 Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.845635 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" event={"ID":"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012","Type":"ContainerDied","Data":"f01f5c84e5ca6a649f1cbdccb4c4e2f793d17d73c0b57633fa3f02de7c6ea09d"} Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.880320 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.880559 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c7f991a-5d56-41ee-a0c3-005743b900b9","Type":"ContainerDied","Data":"8c3bd03a39e6cc8cc42fbbcefabf5f9074ba915e4d56cee8e935e57eb707de41"} Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.908945 4703 scope.go:117] "RemoveContainer" containerID="2b0f1efd57ac5d4844d3c9a0320692922b3163e998e1b9f2c0e4028a660b10e3" Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.913818 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerID="9adfb512ae2ab274efee1d0e64ecacb8729c1d7ec2e24fd57c3de2c74b98259d" exitCode=0 Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.913889 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerDied","Data":"9adfb512ae2ab274efee1d0e64ecacb8729c1d7ec2e24fd57c3de2c74b98259d"} Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.953557 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:18 crc kubenswrapper[4703]: I1209 12:29:18.976856 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.005997 4703 scope.go:117] "RemoveContainer" containerID="d07006d6a61b3b7a5d4ce06765d558f99bce004e3a477926ef4e379538a22286" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.040883 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.053381 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.114617 4703 scope.go:117] "RemoveContainer" containerID="d97e9fdfe98dc422ad1cce9839437d8b0f25b9c846cd6019011703236ff7f7b2" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.139933 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.152062 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs\") pod \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.152129 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data\") pod \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.152326 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom\") pod \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.152613 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle\") pod \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.152656 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp5ws\" (UniqueName: \"kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws\") pod \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\" (UID: \"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.155351 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" path="/var/lib/kubelet/pods/2c7f991a-5d56-41ee-a0c3-005743b900b9/volumes" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.156957 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs" (OuterVolumeSpecName: "logs") pod "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" (UID: "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.161629 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" (UID: "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.167118 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws" (OuterVolumeSpecName: "kube-api-access-cp5ws") pod "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" (UID: "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f"). InnerVolumeSpecName "kube-api-access-cp5ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.169372 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.169426 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170083 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="probe" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170099 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="probe" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170112 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="init" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170118 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="init" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170138 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="cinder-scheduler" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170144 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="cinder-scheduler" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170150 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170156 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170168 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="dnsmasq-dns" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170173 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="dnsmasq-dns" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170206 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170213 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170226 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170232 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api" Dec 09 12:29:19 crc kubenswrapper[4703]: E1209 12:29:19.170250 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170255 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170438 4703 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" containerName="dnsmasq-dns" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170453 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170465 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="cinder-scheduler" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170476 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7f991a-5d56-41ee-a0c3-005743b900b9" containerName="probe" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170486 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" containerName="cloudkitty-api" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170500 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api-log" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.170508 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" containerName="barbican-api" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.184469 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.188990 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.204539 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.207104 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.220305 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.243288 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.243947 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.244290 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.244512 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255247 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255594 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255728 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255780 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c675c\" (UniqueName: \"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255838 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.255911 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0\") pod \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\" (UID: \"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012\") " Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.260663 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.263501 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c" (OuterVolumeSpecName: "kube-api-access-c675c") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "kube-api-access-c675c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.268145 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" (UID: "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.268355 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0caaf13-e6d6-4666-a620-b09e9988bb1c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.268460 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.268543 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gr8t\" (UniqueName: \"kubernetes.io/projected/b0caaf13-e6d6-4666-a620-b09e9988bb1c-kube-api-access-9gr8t\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.268893 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.269510 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.269931 4703 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.269964 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c675c\" (UniqueName: \"kubernetes.io/projected/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-kube-api-access-c675c\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.270001 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 
12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.270014 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp5ws\" (UniqueName: \"kubernetes.io/projected/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-kube-api-access-cp5ws\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.270027 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.325455 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data" (OuterVolumeSpecName: "config-data") pod "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" (UID: "8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372482 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372591 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld7zb\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-kube-api-access-ld7zb\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372617 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372663 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372686 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-scripts\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.372762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0caaf13-e6d6-4666-a620-b09e9988bb1c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373478 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " 
pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373515 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373565 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gr8t\" (UniqueName: \"kubernetes.io/projected/b0caaf13-e6d6-4666-a620-b09e9988bb1c-kube-api-access-9gr8t\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373647 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373683 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373717 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373802 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fe1291-16c1-4602-b70d-fad8bda0f61b-logs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373927 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.373977 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.374073 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.374166 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b0caaf13-e6d6-4666-a620-b09e9988bb1c-etc-machine-id\") 
pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.384455 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.385570 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-config-data\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.389680 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-scripts\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.403083 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.409661 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gr8t\" (UniqueName: \"kubernetes.io/projected/b0caaf13-e6d6-4666-a620-b09e9988bb1c-kube-api-access-9gr8t\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.415131 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0caaf13-e6d6-4666-a620-b09e9988bb1c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b0caaf13-e6d6-4666-a620-b09e9988bb1c\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.418003 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.424182 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config" (OuterVolumeSpecName: "config") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.424910 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.438121 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" (UID: "d7ce5f82-bf77-4fcc-9ae9-55c0732c5012"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.475844 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fe1291-16c1-4602-b70d-fad8bda0f61b-logs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.475951 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476007 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476047 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld7zb\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-kube-api-access-ld7zb\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476071 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476099 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-scripts\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476157 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476239 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476271 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476360 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476372 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476383 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476393 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.476402 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.477808 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20fe1291-16c1-4602-b70d-fad8bda0f61b-logs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.481523 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.481990 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.483595 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.484809 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-scripts\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.485178 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.487623 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.489180 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/20fe1291-16c1-4602-b70d-fad8bda0f61b-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.501634 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld7zb\" (UniqueName: \"kubernetes.io/projected/20fe1291-16c1-4602-b70d-fad8bda0f61b-kube-api-access-ld7zb\") pod \"cloudkitty-api-0\" (UID: \"20fe1291-16c1-4602-b70d-fad8bda0f61b\") " pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.574533 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.604629 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.947687 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" event={"ID":"d7ce5f82-bf77-4fcc-9ae9-55c0732c5012","Type":"ContainerDied","Data":"28affe60e7c3ddd5d4f2433c6611c538ab9d1dafe943b8ab250aa788fdbd2658"} Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.947990 4703 scope.go:117] "RemoveContainer" containerID="f01f5c84e5ca6a649f1cbdccb4c4e2f793d17d73c0b57633fa3f02de7c6ea09d" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.948138 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-4ccp2" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.966569 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-94fd6bfbb-8c7xs" event={"ID":"8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f","Type":"ContainerDied","Data":"ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397"} Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.966641 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-94fd6bfbb-8c7xs" Dec 09 12:29:19 crc kubenswrapper[4703]: I1209 12:29:19.973228 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"0273ae71-ef65-4c05-a133-45a5078aeca9","Type":"ContainerStarted","Data":"04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7"} Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.003657 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.009333 4703 scope.go:117] "RemoveContainer" containerID="7221928aee52153437649465743c33f4b1548af84eeac21bcea0e7162ddcbbee" Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.047014 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-4ccp2"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.052140 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.963947679 podStartE2EDuration="14.052106683s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="2025-12-09 12:29:07.485604874 +0000 UTC m=+1446.734368393" lastFinishedPulling="2025-12-09 12:29:18.573763868 +0000 UTC m=+1457.822527397" observedRunningTime="2025-12-09 12:29:20.022610526 +0000 UTC m=+1459.271374045" watchObservedRunningTime="2025-12-09 12:29:20.052106683 +0000 UTC m=+1459.300870202" Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.093619 4703 scope.go:117] "RemoveContainer" containerID="9adfb512ae2ab274efee1d0e64ecacb8729c1d7ec2e24fd57c3de2c74b98259d" Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.093943 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.110945 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.121594 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-94fd6bfbb-8c7xs"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.189904 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.224363 4703 scope.go:117] "RemoveContainer" containerID="6dead61dfb312ac79981c8eabd409d599baae5e2925bfec2efcfb51b184b5bb1" Dec 09 12:29:20 crc kubenswrapper[4703]: I1209 12:29:20.353859 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.068236 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20fe1291-16c1-4602-b70d-fad8bda0f61b","Type":"ContainerStarted","Data":"b37a91e29a5ad7315ee61f56d3956db87b536daf9fd1be3c322b7330a72b36da"} Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.068623 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20fe1291-16c1-4602-b70d-fad8bda0f61b","Type":"ContainerStarted","Data":"b10461ad61a8efce6a0371e7fb96e1d1f8cb13d03e9afc288caf22abc393c00f"} Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.106836 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62eeb497-1ec6-4eb9-b130-b3f901ad26f0" path="/var/lib/kubelet/pods/62eeb497-1ec6-4eb9-b130-b3f901ad26f0/volumes" Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 
12:29:21.107663 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f" path="/var/lib/kubelet/pods/8ff08dc4-6bf7-4c61-bdf4-289c0c653a2f/volumes" Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.108304 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ce5f82-bf77-4fcc-9ae9-55c0732c5012" path="/var/lib/kubelet/pods/d7ce5f82-bf77-4fcc-9ae9-55c0732c5012/volumes" Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.109677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0caaf13-e6d6-4666-a620-b09e9988bb1c","Type":"ContainerStarted","Data":"ca11f6e73d254af5ed3a6367722edd83cca1af23a03c2673af0a5ced2dad6c9b"} Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.177428 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:29:21 crc kubenswrapper[4703]: I1209 12:29:21.177527 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.169581 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0caaf13-e6d6-4666-a620-b09e9988bb1c","Type":"ContainerStarted","Data":"d386c03ae18f60ff6fae04d3728b2a343e700eaa52fc54fac031e22e82253773"} Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.196675 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"20fe1291-16c1-4602-b70d-fad8bda0f61b","Type":"ContainerStarted","Data":"5baa7d847791ad6bb50cadd4a858a1af3597c7d74c38ae20bc1aee63d5c3e3f5"} Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.197298 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="0273ae71-ef65-4c05-a133-45a5078aeca9" containerName="cloudkitty-proc" containerID="cri-o://04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7" gracePeriod=30 Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.251815 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=4.251786293 podStartE2EDuration="4.251786293s" podCreationTimestamp="2025-12-09 12:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:22.225133587 +0000 UTC m=+1461.473897116" watchObservedRunningTime="2025-12-09 12:29:22.251786293 +0000 UTC m=+1461.500549812" Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.277659 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m57j8" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server" probeResult="failure" output=< Dec 09 12:29:22 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:29:22 crc kubenswrapper[4703]: > Dec 09 12:29:22 crc kubenswrapper[4703]: I1209 12:29:22.601506 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 09 12:29:23 crc kubenswrapper[4703]: I1209 12:29:23.218580 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b0caaf13-e6d6-4666-a620-b09e9988bb1c","Type":"ContainerStarted","Data":"a5ee200c5932efaeb0f755e8045d21d6300c73a5515c9171f4077fee4a16cf5d"} Dec 09 12:29:23 
crc kubenswrapper[4703]: I1209 12:29:23.219234 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Dec 09 12:29:23 crc kubenswrapper[4703]: I1209 12:29:23.251268 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.251242466 podStartE2EDuration="5.251242466s" podCreationTimestamp="2025-12-09 12:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:23.240146609 +0000 UTC m=+1462.488910128" watchObservedRunningTime="2025-12-09 12:29:23.251242466 +0000 UTC m=+1462.500005985" Dec 09 12:29:24 crc kubenswrapper[4703]: I1209 12:29:24.574634 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.255134 4703 generic.go:334] "Generic (PLEG): container finished" podID="0273ae71-ef65-4c05-a133-45a5078aeca9" containerID="04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7" exitCode=0 Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.255342 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"0273ae71-ef65-4c05-a133-45a5078aeca9","Type":"ContainerDied","Data":"04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7"} Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.496764 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578092 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578152 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578192 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578230 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578334 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l275f\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.578570 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data\") pod \"0273ae71-ef65-4c05-a133-45a5078aeca9\" (UID: \"0273ae71-ef65-4c05-a133-45a5078aeca9\") " Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.588744 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs" (OuterVolumeSpecName: "certs") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.590303 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts" (OuterVolumeSpecName: "scripts") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.592512 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f" (OuterVolumeSpecName: "kube-api-access-l275f") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "kube-api-access-l275f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.598548 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.626075 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data" (OuterVolumeSpecName: "config-data") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.632020 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0273ae71-ef65-4c05-a133-45a5078aeca9" (UID: "0273ae71-ef65-4c05-a133-45a5078aeca9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680332 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l275f\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-kube-api-access-l275f\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680380 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680396 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680408 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680421 4703 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0273ae71-ef65-4c05-a133-45a5078aeca9-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:25 crc kubenswrapper[4703]: I1209 12:29:25.680432 4703 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0273ae71-ef65-4c05-a133-45a5078aeca9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.268755 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85bddbcbcc-v2sfc"] Dec 09 12:29:26 crc kubenswrapper[4703]: E1209 12:29:26.269927 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0273ae71-ef65-4c05-a133-45a5078aeca9" containerName="cloudkitty-proc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.269957 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0273ae71-ef65-4c05-a133-45a5078aeca9" containerName="cloudkitty-proc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.270284 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0273ae71-ef65-4c05-a133-45a5078aeca9" containerName="cloudkitty-proc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.276141 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.279403 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.279787 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.280270 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.286402 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"0273ae71-ef65-4c05-a133-45a5078aeca9","Type":"ContainerDied","Data":"e57641c835214038236ebb7b876fc5f5b780b82299b43f7d8ebe2c80e33949ae"} Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.286473 4703 scope.go:117] "RemoveContainer" containerID="04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.286480 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.292748 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bddbcbcc-v2sfc"] Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300067 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-config-data\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300177 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-etc-swift\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300393 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27bl\" (UniqueName: \"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-kube-api-access-x27bl\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300619 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-public-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300661 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-combined-ca-bundle\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300782 4703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-log-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300833 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-run-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.300851 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-internal-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.381893 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.401555 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.402932 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-combined-ca-bundle\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403010 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-log-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403057 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-run-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403086 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-internal-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403127 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-config-data\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403256 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-etc-swift\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403329 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27bl\" (UniqueName: \"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-kube-api-access-x27bl\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403427 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-public-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.403838 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-run-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.409477 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-combined-ca-bundle\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.409777 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-log-httpd\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.410531 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-internal-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.414547 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-etc-swift\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.419422 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-config-data\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.422595 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.424636 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.425543 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-public-tls-certs\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.430484 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.437519 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.439805 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27bl\" (UniqueName: \"kubernetes.io/projected/366b66fd-ba0b-44c9-b9c4-ad2038f94d86-kube-api-access-x27bl\") pod \"swift-proxy-85bddbcbcc-v2sfc\" (UID: \"366b66fd-ba0b-44c9-b9c4-ad2038f94d86\") " pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.505849 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.505979 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6dl\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-kube-api-access-lp6dl\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.506010 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.506037 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.506092 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-certs\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.506241 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.606293 4703 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.607732 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.607826 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.607892 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6dl\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-kube-api-access-lp6dl\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.607921 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.607953 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.608011 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-certs\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.615455 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-certs\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.616068 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.622392 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.623997 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-scripts\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.627044 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bfa711-6e54-46c2-a7d4-e14927ffbc09-config-data\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.627716 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp6dl\" (UniqueName: \"kubernetes.io/projected/39bfa711-6e54-46c2-a7d4-e14927ffbc09-kube-api-access-lp6dl\") pod \"cloudkitty-proc-0\" (UID: \"39bfa711-6e54-46c2-a7d4-e14927ffbc09\") " pod="openstack/cloudkitty-proc-0" Dec 09 12:29:26 crc kubenswrapper[4703]: I1209 12:29:26.889646 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.088699 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0273ae71-ef65-4c05-a133-45a5078aeca9" path="/var/lib/kubelet/pods/0273ae71-ef65-4c05-a133-45a5078aeca9/volumes" Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.255425 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.258084 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-central-agent" containerID="cri-o://54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5" gracePeriod=30 Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.258171 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="proxy-httpd" containerID="cri-o://1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf" gracePeriod=30 Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.258235 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-notification-agent" containerID="cri-o://c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89" gracePeriod=30 Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.258114 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="sg-core" containerID="cri-o://3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb" gracePeriod=30 Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.271358 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 12:29:27 crc kubenswrapper[4703]: I1209 12:29:27.333950 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bddbcbcc-v2sfc"] Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330417 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerID="1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf" exitCode=0 Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330460 4703 generic.go:334] "Generic 
(PLEG): container finished" podID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerID="3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb" exitCode=2 Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330469 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerID="54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5" exitCode=0 Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330495 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerDied","Data":"1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf"} Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330528 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerDied","Data":"3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb"} Dec 09 12:29:28 crc kubenswrapper[4703]: I1209 12:29:28.330539 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerDied","Data":"54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5"} Dec 09 12:29:29 crc kubenswrapper[4703]: I1209 12:29:29.581856 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:29:29 crc kubenswrapper[4703]: I1209 12:29:29.582648 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-log" containerID="cri-o://d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf" gracePeriod=30 Dec 09 12:29:29 crc kubenswrapper[4703]: I1209 12:29:29.582884 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-httpd" containerID="cri-o://ad9ce1825554221993e768893541a522a107440f94554e7bfba233ff5d0b16be" gracePeriod=30 Dec 09 12:29:29 crc kubenswrapper[4703]: I1209 12:29:29.888722 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.083903 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.083962 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.360562 4703 generic.go:334] "Generic (PLEG): container finished" podID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerID="eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22" exitCode=137 Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.360629 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerDied","Data":"eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22"} Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.363366 4703 generic.go:334] "Generic (PLEG): container finished" podID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerID="d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf" exitCode=143 Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.363453 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerDied","Data":"d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf"} Dec 09 12:29:30 crc kubenswrapper[4703]: I1209 12:29:30.713391 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": dial tcp 10.217.0.182:3000: connect: connection refused" Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.871201 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62eeb497_1ec6_4eb9_b130_b3f901ad26f0.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62eeb497_1ec6_4eb9_b130_b3f901ad26f0.slice: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.871539 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-e57641c835214038236ebb7b876fc5f5b780b82299b43f7d8ebe2c80e33949ae": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-e57641c835214038236ebb7b876fc5f5b780b82299b43f7d8ebe2c80e33949ae: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.871736 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71f8c26_a813_4d5b_9ce4_c9b6075a3153.slice/crio-conmon-3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71f8c26_a813_4d5b_9ce4_c9b6075a3153.slice/crio-conmon-3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07.scope: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.871767 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71f8c26_a813_4d5b_9ce4_c9b6075a3153.slice/crio-3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71f8c26_a813_4d5b_9ce4_c9b6075a3153.slice/crio-3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07.scope: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.876250 4703 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-conmon-1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-conmon-1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf.scope: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.876291 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf.scope: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.881949 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-conmon-04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-conmon-04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7.scope: no such file or directory Dec 09 12:29:31 crc kubenswrapper[4703]: W1209 12:29:31.882032 4703 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice/crio-04a69e4bfa55e690fa4851872202f84f41c444adf172c08a65c3f280d6ee6ca7.scope: no such file or directory Dec 09 12:29:32 crc kubenswrapper[4703]: E1209 12:29:32.161477 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4f79b4_592c_4dc1_ad09_c1582b9d8497.slice/crio-d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7ce5f82_bf77_4fcc_9ae9_55c0732c5012.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-conmon-c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7453bd3b_1c8e_4401_b6b3_644e7aff4ca2.slice/crio-conmon-eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff08dc4_6bf7_4c61_bdf4_289c0c653a2f.slice/crio-ca97f713c271635edecd9b50e9aa8c3c14188b16f8660bce2a71cd6274753397\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ff08dc4_6bf7_4c61_bdf4_289c0c653a2f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7ce5f82_bf77_4fcc_9ae9_55c0732c5012.slice/crio-28affe60e7c3ddd5d4f2433c6611c538ab9d1dafe943b8ab250aa788fdbd2658\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0273ae71_ef65_4c05_a133_45a5078aeca9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-conmon-54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb4f79b4_592c_4dc1_ad09_c1582b9d8497.slice/crio-conmon-d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice/crio-conmon-3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7453bd3b_1c8e_4401_b6b3_644e7aff4ca2.slice/crio-eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:29:32 crc kubenswrapper[4703]: I1209 12:29:32.247594 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m57j8" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server" probeResult="failure" output=< Dec 09 12:29:32 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:29:32 crc kubenswrapper[4703]: > Dec 09 12:29:32 crc kubenswrapper[4703]: I1209 12:29:32.392544 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerID="c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89" exitCode=0 Dec 09 12:29:32 crc kubenswrapper[4703]: I1209 12:29:32.392600 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerDied","Data":"c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89"} Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.205147 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" probeResult="failure" output="Get 
\"http://10.217.0.179:8776/healthcheck\": dial tcp 10.217.0.179:8776: connect: connection refused" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.409469 4703 generic.go:334] "Generic (PLEG): container finished" podID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerID="ad9ce1825554221993e768893541a522a107440f94554e7bfba233ff5d0b16be" exitCode=0 Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.409533 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerDied","Data":"ad9ce1825554221993e768893541a522a107440f94554e7bfba233ff5d0b16be"} Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.630651 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bqvwt"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.632490 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.661587 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bqvwt"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.765557 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5432-account-create-update-t6hf2"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.768733 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.777181 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.814590 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts\") pod \"nova-api-db-create-bqvwt\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.814658 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tj5c\" (UniqueName: \"kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c\") pod \"nova-api-db-create-bqvwt\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.820044 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5jcln"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.821870 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.841457 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5432-account-create-update-t6hf2"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.861401 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jcln"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.915529 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fms5f"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917092 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2n7\" (UniqueName: \"kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917213 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jpr\" (UniqueName: \"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917231 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917272 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts\") pod \"nova-api-db-create-bqvwt\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917315 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tj5c\" (UniqueName: \"kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c\") pod \"nova-api-db-create-bqvwt\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917401 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.917539 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.918162 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts\") pod \"nova-api-db-create-bqvwt\" (UID: 
\"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.940960 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fms5f"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.945589 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tj5c\" (UniqueName: \"kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c\") pod \"nova-api-db-create-bqvwt\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") " pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.964528 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.983873 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2cb9-account-create-update-n9tpj"] Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.985530 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:33 crc kubenswrapper[4703]: I1209 12:29:33.991093 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.019901 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2n7\" (UniqueName: \"kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.019993 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jpr\" (UniqueName: \"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.020040 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.020124 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.020227 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8sn\" (UniqueName: \"kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.020288 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.021278 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.021372 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2cb9-account-create-update-n9tpj"] Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.021981 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.053472 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2n7\" (UniqueName: \"kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7\") pod \"nova-cell0-db-create-5jcln\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") " pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.062214 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jpr\" (UniqueName: \"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") pod \"nova-api-5432-account-create-update-t6hf2\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") " pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.122120 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.122218 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.122360 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8sn\" (UniqueName: \"kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.122442 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4lx\" (UniqueName: 
\"kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.123377 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.125449 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.143246 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-241d-account-create-update-jx7s6"] Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.144960 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.147811 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.152801 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8sn\" (UniqueName: \"kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn\") pod \"nova-cell1-db-create-fms5f\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.162741 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.182046 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-241d-account-create-update-jx7s6"] Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.225003 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.225056 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsj4b\" (UniqueName: \"kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.225582 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4lx\" (UniqueName: \"kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.225920 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.227078 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.246363 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.246910 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4lx\" (UniqueName: \"kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx\") pod \"nova-cell0-2cb9-account-create-update-n9tpj\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") " pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.328387 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.328446 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsj4b\" (UniqueName: \"kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.329942 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.330598 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.349995 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsj4b\" (UniqueName: \"kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b\") pod \"nova-cell1-241d-account-create-update-jx7s6\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") " pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:34 crc kubenswrapper[4703]: I1209 12:29:34.532662 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.460953 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" event={"ID":"366b66fd-ba0b-44c9-b9c4-ad2038f94d86","Type":"ContainerStarted","Data":"660378cdf6cf518357b96a4a0269a6458eb02faa318351efab32149dc23bac73"} Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.685683 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767155 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767435 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767533 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767575 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767649 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767713 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mf2\" (UniqueName: \"kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.767768 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id\") pod \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\" (UID: \"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2\") " Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.768416 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs" (OuterVolumeSpecName: "logs") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.768549 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.843075 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts" (OuterVolumeSpecName: "scripts") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.858423 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.868424 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2" (OuterVolumeSpecName: "kube-api-access-q9mf2") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "kube-api-access-q9mf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.870462 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mf2\" (UniqueName: \"kubernetes.io/projected/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-kube-api-access-q9mf2\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.870499 4703 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.870509 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.870518 4703 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:35 crc kubenswrapper[4703]: I1209 12:29:35.870531 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.119860 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.197890 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.209357 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data" (OuterVolumeSpecName: "config-data") pod "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" (UID: "7453bd3b-1c8e-4401-b6b3-644e7aff4ca2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.300162 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.331255 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.347243 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.404773 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.404829 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.404907 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.404962 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfwk5\" (UniqueName: \"kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.404990 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405126 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 
12:29:36.405212 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk6bp\" (UniqueName: \"kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405311 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405349 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405372 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405454 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405498 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405514 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405532 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405548 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml\") pod \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\" (UID: \"e5e4fae3-dab6-4aab-87fa-01b51c6f05db\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.405577 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs\") pod \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\" (UID: \"db4f79b4-592c-4dc1-ad09-c1582b9d8497\") " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.406108 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.407472 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs" (OuterVolumeSpecName: "logs") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.407796 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.423590 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.434696 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp" (OuterVolumeSpecName: "kube-api-access-xk6bp") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "kube-api-access-xk6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.452721 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5" (OuterVolumeSpecName: "kube-api-access-pfwk5") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "kube-api-access-pfwk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.453402 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts" (OuterVolumeSpecName: "scripts") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.459287 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts" (OuterVolumeSpecName: "scripts") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512053 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512103 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512113 4703 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512126 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db4f79b4-592c-4dc1-ad09-c1582b9d8497-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512140 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512151 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfwk5\" (UniqueName: \"kubernetes.io/projected/db4f79b4-592c-4dc1-ad09-c1582b9d8497-kube-api-access-pfwk5\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.512168 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk6bp\" (UniqueName: \"kubernetes.io/projected/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-kube-api-access-xk6bp\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.566062 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fms5f"] Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.573235 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7453bd3b-1c8e-4401-b6b3-644e7aff4ca2","Type":"ContainerDied","Data":"1564de6bb2e7f86d2af6f6be076b2f4974b2b52ef434efb9a4038add710709a3"} Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.573297 4703 scope.go:117] "RemoveContainer" containerID="eb5f5fc72aa49f2e665a1f22401611976d61113a9988059288776c4befc0cb22" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.573593 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.609708 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703" (OuterVolumeSpecName: "glance") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). 
InnerVolumeSpecName "pvc-b4442429-2353-4ea7-ac70-798afe17b703". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.614557 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") on node \"crc\" " Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.632732 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e5e4fae3-dab6-4aab-87fa-01b51c6f05db","Type":"ContainerDied","Data":"314ac44285b5a3a856e849a37b17ad1b466db5f5fcf6d07623dad0f486cd1e0d"} Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.634308 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.646941 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.647599 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4442429-2353-4ea7-ac70-798afe17b703" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703") on node "crc" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.652965 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.659038 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bqvwt"] Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.659903 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" event={"ID":"366b66fd-ba0b-44c9-b9c4-ad2038f94d86","Type":"ContainerStarted","Data":"d084928630466d832ce6d08406aee527f8932a5083ef4501c4221006a87f8c51"} Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.681345 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db4f79b4-592c-4dc1-ad09-c1582b9d8497","Type":"ContainerDied","Data":"b2024022a0b733b85ee0ec1a275df2eac0ab96983b162851c980b4bc645308a8"} Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.681523 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.723948 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"82a9b4ac-cb47-454e-802d-0f24b798103b","Type":"ContainerStarted","Data":"8429c55e0015e9abe9ebffcf6ca47c4d8d2c90d1f2c90d9c551709067269213d"} Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.726391 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.753548 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.9451763829999997 podStartE2EDuration="19.753520529s" podCreationTimestamp="2025-12-09 12:29:17 +0000 UTC" firstStartedPulling="2025-12-09 12:29:18.571383249 +0000 UTC m=+1457.820146768" lastFinishedPulling="2025-12-09 12:29:35.379727395 +0000 UTC m=+1474.628490914" observedRunningTime="2025-12-09 12:29:36.747697413 +0000 UTC m=+1475.996460942" watchObservedRunningTime="2025-12-09 12:29:36.753520529 +0000 UTC m=+1476.002284048" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.785413 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.785468 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.785586 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.795046 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.802887 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data" (OuterVolumeSpecName: "config-data") pod "db4f79b4-592c-4dc1-ad09-c1582b9d8497" (UID: "db4f79b4-592c-4dc1-ad09-c1582b9d8497"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.854000 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.862525 4703 scope.go:117] "RemoveContainer" containerID="341864cf418070075ecf2f45bd5aab48a85e5c5716fa23240910c198bfb425b3" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.889521 4703 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.891378 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f79b4-592c-4dc1-ad09-c1582b9d8497-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.906363 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.923859 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.939627 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940125 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-log" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940143 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-log" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940158 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api-log" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940165 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api-log" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940177 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="sg-core" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940187 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="sg-core" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940219 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940226 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940235 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940241 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940259 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-central-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940265 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-central-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940281 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="proxy-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940288 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="proxy-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: E1209 12:29:36.940305 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-notification-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940311 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-notification-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940503 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-log" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940511 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" containerName="glance-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940525 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-notification-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940532 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api-log" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940542 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="sg-core" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940552 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="proxy-httpd" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940559 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" containerName="cinder-api" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.940572 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" containerName="ceilometer-central-agent" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.941787 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.944969 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.945469 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.945561 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.980280 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.988950 4703 scope.go:117] "RemoveContainer" containerID="1210ec49f9faa0ed785e1039b04d2ebed4be74421708caf63fccfadb292c5adf" Dec 09 12:29:36 crc kubenswrapper[4703]: I1209 12:29:36.995906 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.012399 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data" (OuterVolumeSpecName: "config-data") pod "e5e4fae3-dab6-4aab-87fa-01b51c6f05db" (UID: "e5e4fae3-dab6-4aab-87fa-01b51c6f05db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.102148 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjs2h\" (UniqueName: \"kubernetes.io/projected/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-kube-api-access-hjs2h\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.102983 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.103437 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.103543 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.103601 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 
12:29:37.103708 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-scripts\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.103750 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-logs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.103826 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.104014 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.104206 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e4fae3-dab6-4aab-87fa-01b51c6f05db-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.155842 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7453bd3b-1c8e-4401-b6b3-644e7aff4ca2" path="/var/lib/kubelet/pods/7453bd3b-1c8e-4401-b6b3-644e7aff4ca2/volumes" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.158174 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2cb9-account-create-update-n9tpj"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.179810 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5jcln"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.201019 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206301 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjs2h\" (UniqueName: \"kubernetes.io/projected/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-kube-api-access-hjs2h\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206372 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206466 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: 
I1209 12:29:37.206500 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206524 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206566 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-scripts\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206586 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-logs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206610 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.206670 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.208028 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-logs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.208437 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.208596 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5432-account-create-update-t6hf2"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.216878 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.222984 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-scripts\") pod \"cinder-api-0\" (UID: 
\"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.227688 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.230996 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.231708 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-config-data-custom\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.246967 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.250587 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-241d-account-create-update-jx7s6"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.252959 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjs2h\" (UniqueName: \"kubernetes.io/projected/b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e-kube-api-access-hjs2h\") pod \"cinder-api-0\" (UID: \"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e\") " pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.476665 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.513472 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.546601 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.584969 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.588542 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.595832 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.596414 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.597380 4703 scope.go:117] "RemoveContainer" containerID="3ea74029e6d7b1f994d4533286f61d1cd15de0e704e9c73332d4981e3903c8fb" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.610018 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.653028 4703 scope.go:117] "RemoveContainer" containerID="c25d8021eedebb2ddc3ad815f7ee1d179a16c4c0ee25805fc6614451e866cc89" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.721088 4703 scope.go:117] "RemoveContainer" containerID="54f6bc64b0672882246e0ba372ef6612f084e6efceea5dfb934d2eb7f66bdaa5" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723289 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723361 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzv8\" (UniqueName: \"kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723390 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723498 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723614 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723647 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.723718 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.760001 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"39bfa711-6e54-46c2-a7d4-e14927ffbc09","Type":"ContainerStarted","Data":"20ca1760b25cde6d4e8b1640d4d6e7438b4bd1f298e493b6d694818f3af03034"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.773098 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" event={"ID":"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73","Type":"ContainerStarted","Data":"5a89f8a3fac72ea4256f0730e466110d800d5ddcd6d25e208efb3c90eef84a67"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.784717 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" event={"ID":"366b66fd-ba0b-44c9-b9c4-ad2038f94d86","Type":"ContainerStarted","Data":"76fa7eb2f5e12975ba9ce5ef04edfec20174874eb3e8c85461d55f916ed334ca"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.785288 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.785357 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.787097 4703 scope.go:117] "RemoveContainer" containerID="ad9ce1825554221993e768893541a522a107440f94554e7bfba233ff5d0b16be" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.795163 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fms5f" event={"ID":"3b5815e2-e19c-491f-b0c9-19e651e10fec","Type":"ContainerStarted","Data":"96d8eea2d9e701a39ee402070c6fd873845af43c63404625177ee02b3fcc549e"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.795468 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fms5f" event={"ID":"3b5815e2-e19c-491f-b0c9-19e651e10fec","Type":"ContainerStarted","Data":"d3661547b292bef64116d9582cc1459eea0619d81ef51c92d842c1d91929a356"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.808373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5432-account-create-update-t6hf2" event={"ID":"d9b712bf-6532-486d-bf10-577b49caba4c","Type":"ContainerStarted","Data":"9906a8101be5c4257136598c810248fcaa82992cc4056aa4b98a440414202407"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.811725 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" event={"ID":"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9","Type":"ContainerStarted","Data":"cde24c2607b6e6b9a16bde2c003177d4803ef8ca82fd48f2dc8b8f0c7db165ac"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.819057 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqvwt" event={"ID":"f26022ad-c235-4dd6-abe2-40fe489afe81","Type":"ContainerStarted","Data":"aafcc445e60ae91cfb4588ab562a2f5b637916d5fb12a70ffb4693ab47ca72b4"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827022 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data\") pod \"ceilometer-0\" (UID: 
\"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827097 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzv8\" (UniqueName: \"kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827130 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827290 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827387 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827418 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.827480 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.837502 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.837542 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.841996 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.846057 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " 
pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.848751 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" podStartSLOduration=11.848720983 podStartE2EDuration="11.848720983s" podCreationTimestamp="2025-12-09 12:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:37.820002555 +0000 UTC m=+1477.068766084" watchObservedRunningTime="2025-12-09 12:29:37.848720983 +0000 UTC m=+1477.097484512" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.856807 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.894557 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzv8\" (UniqueName: \"kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.909930 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcln" event={"ID":"76d30e84-090d-427a-b0bc-f41ced88f0b4","Type":"ContainerStarted","Data":"bed91dcd2dc92148a3a149ab1bc0016383274942f403f299ac3f2492316b148b"} Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.918911 4703 scope.go:117] "RemoveContainer" containerID="d5789bc93f79ea2373b48b65d376f6ea7876199dffc2e47e5864bfa90517bfbf" Dec 09 12:29:37 crc kubenswrapper[4703]: I1209 12:29:37.945817 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts\") pod \"ceilometer-0\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") " pod="openstack/ceilometer-0" Dec 09 12:29:38 crc kubenswrapper[4703]: I1209 12:29:38.005597 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-fms5f" podStartSLOduration=5.005565131 podStartE2EDuration="5.005565131s" podCreationTimestamp="2025-12-09 12:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:37.854587719 +0000 UTC m=+1477.103351238" watchObservedRunningTime="2025-12-09 12:29:38.005565131 +0000 UTC m=+1477.254328650" Dec 09 12:29:38 crc kubenswrapper[4703]: I1209 12:29:38.065117 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5jcln" podStartSLOduration=5.065082937 podStartE2EDuration="5.065082937s" podCreationTimestamp="2025-12-09 12:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:37.954970197 +0000 UTC m=+1477.203733736" watchObservedRunningTime="2025-12-09 12:29:38.065082937 +0000 UTC m=+1477.313846456" Dec 09 12:29:38 crc kubenswrapper[4703]: I1209 12:29:38.235018 4703 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:29:38 crc kubenswrapper[4703]: I1209 12:29:38.361651 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 09 12:29:38 crc kubenswrapper[4703]: I1209 12:29:38.905269 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.027976 4703 generic.go:334] "Generic (PLEG): container finished" podID="76d30e84-090d-427a-b0bc-f41ced88f0b4" containerID="54250a58131dee7a170d21769a59d82eae888a0c48199c1ab36b6af7eb2bead5" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.028080 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcln" event={"ID":"76d30e84-090d-427a-b0bc-f41ced88f0b4","Type":"ContainerDied","Data":"54250a58131dee7a170d21769a59d82eae888a0c48199c1ab36b6af7eb2bead5"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.042419 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5432-account-create-update-t6hf2" event={"ID":"d9b712bf-6532-486d-bf10-577b49caba4c","Type":"ContainerDied","Data":"72f141da5c8a031c7d1251d02c2802bd911e730c1e439cf1df41961391ffb863"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.041892 4703 generic.go:334] "Generic (PLEG): container finished" podID="d9b712bf-6532-486d-bf10-577b49caba4c" containerID="72f141da5c8a031c7d1251d02c2802bd911e730c1e439cf1df41961391ffb863" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.053145 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"39bfa711-6e54-46c2-a7d4-e14927ffbc09","Type":"ContainerStarted","Data":"6b0696506218ccad37574b48141a52e640ccb7295ef824a74e82ad465613102d"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.059622 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e","Type":"ContainerStarted","Data":"fca12c1b42eeaf1aee944dd2fc90dbc6a48d43a668e610280c9489a397375839"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.078108 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=13.078082528 podStartE2EDuration="13.078082528s" podCreationTimestamp="2025-12-09 12:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:39.076684423 +0000 UTC m=+1478.325447952" watchObservedRunningTime="2025-12-09 12:29:39.078082528 +0000 UTC m=+1478.326846047"
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.091374 4703 generic.go:334] "Generic (PLEG): container finished" podID="f26022ad-c235-4dd6-abe2-40fe489afe81" containerID="a7a8dfd51dc66e816339a1f32b077ab2394ffdb55d1bd8f1a1a0dc2fa0de5464" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.094451 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e4fae3-dab6-4aab-87fa-01b51c6f05db" path="/var/lib/kubelet/pods/e5e4fae3-dab6-4aab-87fa-01b51c6f05db/volumes"
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.095382 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqvwt" event={"ID":"f26022ad-c235-4dd6-abe2-40fe489afe81","Type":"ContainerDied","Data":"a7a8dfd51dc66e816339a1f32b077ab2394ffdb55d1bd8f1a1a0dc2fa0de5464"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.110991 4703 generic.go:334] "Generic (PLEG): container finished" podID="59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" containerID="e551da69f2355c1f3f90cd74c443fb8372ec937fbde9a16152a5767d78549c3e" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.111114 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" event={"ID":"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73","Type":"ContainerDied","Data":"e551da69f2355c1f3f90cd74c443fb8372ec937fbde9a16152a5767d78549c3e"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.113813 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerStarted","Data":"6918f1c9131d6a078c9545201a4c29c25d2381de0150e526b27448f44f3916c1"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.116318 4703 generic.go:334] "Generic (PLEG): container finished" podID="3b5815e2-e19c-491f-b0c9-19e651e10fec" containerID="96d8eea2d9e701a39ee402070c6fd873845af43c63404625177ee02b3fcc549e" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.116426 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fms5f" event={"ID":"3b5815e2-e19c-491f-b0c9-19e651e10fec","Type":"ContainerDied","Data":"96d8eea2d9e701a39ee402070c6fd873845af43c63404625177ee02b3fcc549e"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.122207 4703 generic.go:334] "Generic (PLEG): container finished" podID="fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" containerID="7b08e94082453392b3ff99d150e7a01f9247396a0410b23ab2b090f4d33a5e26" exitCode=0
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.123721 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" event={"ID":"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9","Type":"ContainerDied","Data":"7b08e94082453392b3ff99d150e7a01f9247396a0410b23ab2b090f4d33a5e26"}
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.795422 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.796818 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-log" containerID="cri-o://dd5e176b5dc8e23e00e8dc2a1f9d5037a6aa2d80f99ab89d8a59b5211d30d1e8" gracePeriod=30
Dec 09 12:29:39 crc kubenswrapper[4703]: I1209 12:29:39.798011 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-httpd" containerID="cri-o://e665c0eec651eb3aea395c83654ee943673bd3b89396ca192e184ffc6709f905" gracePeriod=30
Dec 09 12:29:40 crc kubenswrapper[4703]: I1209 12:29:40.175059 4703 generic.go:334] "Generic (PLEG): container finished" podID="a92bceed-7795-442f-99c4-c852c51c6284" containerID="dd5e176b5dc8e23e00e8dc2a1f9d5037a6aa2d80f99ab89d8a59b5211d30d1e8" exitCode=143
Dec 09 12:29:40 crc kubenswrapper[4703]: I1209 12:29:40.175238 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerDied","Data":"dd5e176b5dc8e23e00e8dc2a1f9d5037a6aa2d80f99ab89d8a59b5211d30d1e8"}
Dec 09 12:29:40 crc kubenswrapper[4703]: I1209 12:29:40.181237 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e","Type":"ContainerStarted","Data":"1516408b259bd4f566b1b09fc6fefa95e30855bec655499ef2651408ad7127b3"}
pod="openstack/cinder-api-0" event={"ID":"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e","Type":"ContainerStarted","Data":"1516408b259bd4f566b1b09fc6fefa95e30855bec655499ef2651408ad7127b3"} Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.275982 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.284713 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.317177 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerStarted","Data":"e374ad9846e8bc4fa09abe97caf319e7922096a04787be0cd2238c81120960c0"} Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.357011 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz8sn\" (UniqueName: \"kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn\") pod \"3b5815e2-e19c-491f-b0c9-19e651e10fec\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.357187 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts\") pod \"3b5815e2-e19c-491f-b0c9-19e651e10fec\" (UID: \"3b5815e2-e19c-491f-b0c9-19e651e10fec\") " Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.371806 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b5815e2-e19c-491f-b0c9-19e651e10fec" (UID: "3b5815e2-e19c-491f-b0c9-19e651e10fec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.372020 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fms5f" event={"ID":"3b5815e2-e19c-491f-b0c9-19e651e10fec","Type":"ContainerDied","Data":"d3661547b292bef64116d9582cc1459eea0619d81ef51c92d842c1d91929a356"} Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.372084 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3661547b292bef64116d9582cc1459eea0619d81ef51c92d842c1d91929a356" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.372097 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fms5f" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.396723 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn" (OuterVolumeSpecName: "kube-api-access-nz8sn") pod "3b5815e2-e19c-491f-b0c9-19e651e10fec" (UID: "3b5815e2-e19c-491f-b0c9-19e651e10fec"). InnerVolumeSpecName "kube-api-access-nz8sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.441122 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m57j8" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.459902 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5815e2-e19c-491f-b0c9-19e651e10fec-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.459942 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz8sn\" (UniqueName: \"kubernetes.io/projected/3b5815e2-e19c-491f-b0c9-19e651e10fec-kube-api-access-nz8sn\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.698773 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.712262 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"] Dec 09 12:29:41 crc kubenswrapper[4703]: I1209 12:29:41.729599 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bddbcbcc-v2sfc" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.078495 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.093167 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.110266 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.122299 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.147023 4703 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205126 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsj4b\" (UniqueName: \"kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b\") pod \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205233 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4lx\" (UniqueName: \"kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx\") pod \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205298 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts\") pod \"d9b712bf-6532-486d-bf10-577b49caba4c\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205352 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tj5c\" (UniqueName: \"kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c\") pod \"f26022ad-c235-4dd6-abe2-40fe489afe81\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205409 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts\") pod \"f26022ad-c235-4dd6-abe2-40fe489afe81\" (UID: \"f26022ad-c235-4dd6-abe2-40fe489afe81\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205501 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts\") pod \"76d30e84-090d-427a-b0bc-f41ced88f0b4\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205652 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts\") pod \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\" (UID: \"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205694 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6jpr\" (UniqueName: \"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") pod \"d9b712bf-6532-486d-bf10-577b49caba4c\" (UID: \"d9b712bf-6532-486d-bf10-577b49caba4c\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205757 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts\") pod \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\" (UID: \"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.205784 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs2n7\" (UniqueName: \"kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7\") pod \"76d30e84-090d-427a-b0bc-f41ced88f0b4\" (UID: \"76d30e84-090d-427a-b0bc-f41ced88f0b4\") "
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.207347 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" (UID: "59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.209663 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76d30e84-090d-427a-b0bc-f41ced88f0b4" (UID: "76d30e84-090d-427a-b0bc-f41ced88f0b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.210176 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f26022ad-c235-4dd6-abe2-40fe489afe81" (UID: "f26022ad-c235-4dd6-abe2-40fe489afe81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.210714 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" (UID: "fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.211456 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9b712bf-6532-486d-bf10-577b49caba4c" (UID: "d9b712bf-6532-486d-bf10-577b49caba4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.220479 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx" (OuterVolumeSpecName: "kube-api-access-4p4lx") pod "59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" (UID: "59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73"). InnerVolumeSpecName "kube-api-access-4p4lx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.225119 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7" (OuterVolumeSpecName: "kube-api-access-gs2n7") pod "76d30e84-090d-427a-b0bc-f41ced88f0b4" (UID: "76d30e84-090d-427a-b0bc-f41ced88f0b4"). InnerVolumeSpecName "kube-api-access-gs2n7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.225217 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b" (OuterVolumeSpecName: "kube-api-access-nsj4b") pod "fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" (UID: "fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9"). InnerVolumeSpecName "kube-api-access-nsj4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.226474 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr" (OuterVolumeSpecName: "kube-api-access-b6jpr") pod "d9b712bf-6532-486d-bf10-577b49caba4c" (UID: "d9b712bf-6532-486d-bf10-577b49caba4c"). InnerVolumeSpecName "kube-api-access-b6jpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.238465 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c" (OuterVolumeSpecName: "kube-api-access-8tj5c") pod "f26022ad-c235-4dd6-abe2-40fe489afe81" (UID: "f26022ad-c235-4dd6-abe2-40fe489afe81"). InnerVolumeSpecName "kube-api-access-8tj5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309805 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsj4b\" (UniqueName: \"kubernetes.io/projected/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-kube-api-access-nsj4b\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309867 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4lx\" (UniqueName: \"kubernetes.io/projected/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-kube-api-access-4p4lx\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309881 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b712bf-6532-486d-bf10-577b49caba4c-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309896 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tj5c\" (UniqueName: \"kubernetes.io/projected/f26022ad-c235-4dd6-abe2-40fe489afe81-kube-api-access-8tj5c\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309909 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f26022ad-c235-4dd6-abe2-40fe489afe81-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309920 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d30e84-090d-427a-b0bc-f41ced88f0b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309937 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309951 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6jpr\" (UniqueName: \"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/projected/d9b712bf-6532-486d-bf10-577b49caba4c-kube-api-access-b6jpr\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309961 4703 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.309970 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs2n7\" (UniqueName: \"kubernetes.io/projected/76d30e84-090d-427a-b0bc-f41ced88f0b4-kube-api-access-gs2n7\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.391764 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqvwt" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.391743 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqvwt" event={"ID":"f26022ad-c235-4dd6-abe2-40fe489afe81","Type":"ContainerDied","Data":"aafcc445e60ae91cfb4588ab562a2f5b637916d5fb12a70ffb4693ab47ca72b4"} Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.391925 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafcc445e60ae91cfb4588ab562a2f5b637916d5fb12a70ffb4693ab47ca72b4" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.394004 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5jcln" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.394032 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5jcln" event={"ID":"76d30e84-090d-427a-b0bc-f41ced88f0b4","Type":"ContainerDied","Data":"bed91dcd2dc92148a3a149ab1bc0016383274942f403f299ac3f2492316b148b"} Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.394072 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bed91dcd2dc92148a3a149ab1bc0016383274942f403f299ac3f2492316b148b" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.396569 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5432-account-create-update-t6hf2" event={"ID":"d9b712bf-6532-486d-bf10-577b49caba4c","Type":"ContainerDied","Data":"9906a8101be5c4257136598c810248fcaa82992cc4056aa4b98a440414202407"} Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.396603 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9906a8101be5c4257136598c810248fcaa82992cc4056aa4b98a440414202407" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.396665 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5432-account-create-update-t6hf2" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.407706 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" event={"ID":"fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9","Type":"ContainerDied","Data":"cde24c2607b6e6b9a16bde2c003177d4803ef8ca82fd48f2dc8b8f0c7db165ac"} Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.407761 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde24c2607b6e6b9a16bde2c003177d4803ef8ca82fd48f2dc8b8f0c7db165ac" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.407844 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-241d-account-create-update-jx7s6" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.410583 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.411799 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2cb9-account-create-update-n9tpj" event={"ID":"59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73","Type":"ContainerDied","Data":"5a89f8a3fac72ea4256f0730e466110d800d5ddcd6d25e208efb3c90eef84a67"} Dec 09 12:29:42 crc kubenswrapper[4703]: I1209 12:29:42.411841 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a89f8a3fac72ea4256f0730e466110d800d5ddcd6d25e208efb3c90eef84a67" Dec 09 12:29:42 crc kubenswrapper[4703]: E1209 12:29:42.540139 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26022ad_c235_4dd6_abe2_40fe489afe81.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59cdfe75_7ac7_4bf4_9b72_951c5ee0bc73.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76d30e84_090d_427a_b0bc_f41ced88f0b4.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:29:43 crc kubenswrapper[4703]: I1209 12:29:43.421693 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m57j8" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server" containerID="cri-o://68afdfa339f9a8f0da9ae3cbe4ac53e990e55a3ecce98b631fa8b0a4c4682a7f" gracePeriod=2 Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.523826 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xdtxt"] Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524351 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524384 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524406 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26022ad-c235-4dd6-abe2-40fe489afe81" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524413 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26022ad-c235-4dd6-abe2-40fe489afe81" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524430 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d30e84-090d-427a-b0bc-f41ced88f0b4" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524436 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d30e84-090d-427a-b0bc-f41ced88f0b4" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524448 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b712bf-6532-486d-bf10-577b49caba4c" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: 
I1209 12:29:44.524453 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b712bf-6532-486d-bf10-577b49caba4c" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524467 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5815e2-e19c-491f-b0c9-19e651e10fec" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524472 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5815e2-e19c-491f-b0c9-19e651e10fec" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: E1209 12:29:44.524494 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524502 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524676 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d30e84-090d-427a-b0bc-f41ced88f0b4" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524686 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26022ad-c235-4dd6-abe2-40fe489afe81" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524696 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b712bf-6532-486d-bf10-577b49caba4c" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524713 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524721 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5815e2-e19c-491f-b0c9-19e651e10fec" containerName="mariadb-database-create" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.524734 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" containerName="mariadb-account-create-update" Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.525540 4703 util.go:30] "No sandbox for pod can be found. 
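Annotation: the cadvisor "Partial failure" error above is benign here; stats were requested for pods whose containers had just been deleted. Its cgroup paths follow a fixed systemd naming scheme: the pod UID with dashes mapped to underscores inside a kubepods-besteffort-pod<uid>.slice unit. A Go sketch reproducing the path for one of the UIDs named in the error:

    package main

    import (
        "fmt"
        "strings"
    )

    // besteffortSlice reproduces the systemd cgroup path format visible in the
    // cadvisor error above: dashes in the pod UID become underscores, nested
    // under kubepods-besteffort.slice.
    func besteffortSlice(podUID string) string {
        esc := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod%s.slice", esc)
    }

    func main() {
        // UID of nova-api-db-create-bqvwt, whose stats lookup failed above.
        fmt.Println(besteffortSlice("f26022ad-c235-4dd6-abe2-40fe489afe81"))
    }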
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.532955 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-295qv"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.532988 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.533248 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.562709 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xdtxt"]
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.568831 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.569072 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.569124 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2df6\" (UniqueName: \"kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.569513 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.676596 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.676691 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.676750 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.676769 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2df6\" (UniqueName: \"kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.689441 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.690817 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.705156 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.714564 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2df6\" (UniqueName: \"kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6\") pod \"nova-cell0-conductor-db-sync-xdtxt\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:44 crc kubenswrapper[4703]: I1209 12:29:44.859007 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xdtxt"
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.559589 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e","Type":"ContainerStarted","Data":"0b20248e71e5fa9882a6a4ce280823b34cb9dbc52f9417899cbbce7e396353e6"}
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.566368 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.603413 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.603377538 podStartE2EDuration="9.603377538s" podCreationTimestamp="2025-12-09 12:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:45.595648865 +0000 UTC m=+1484.844412384" watchObservedRunningTime="2025-12-09 12:29:45.603377538 +0000 UTC m=+1484.852141057"
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.712161 4703 generic.go:334] "Generic (PLEG): container finished" podID="a92bceed-7795-442f-99c4-c852c51c6284" containerID="e665c0eec651eb3aea395c83654ee943673bd3b89396ca192e184ffc6709f905" exitCode=0
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.712387 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerDied","Data":"e665c0eec651eb3aea395c83654ee943673bd3b89396ca192e184ffc6709f905"}
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.716583 4703 generic.go:334] "Generic (PLEG): container finished" podID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerID="68afdfa339f9a8f0da9ae3cbe4ac53e990e55a3ecce98b631fa8b0a4c4682a7f" exitCode=0
Dec 09 12:29:45 crc kubenswrapper[4703]: I1209 12:29:45.716636 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerDied","Data":"68afdfa339f9a8f0da9ae3cbe4ac53e990e55a3ecce98b631fa8b0a4c4682a7f"}
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.014543 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m57j8"
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.128280 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content\") pod \"03ce63d0-6e1f-4d88-a26c-05b867554db5\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") "
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.128844 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities\") pod \"03ce63d0-6e1f-4d88-a26c-05b867554db5\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") "
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.128938 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbxc\" (UniqueName: \"kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc\") pod \"03ce63d0-6e1f-4d88-a26c-05b867554db5\" (UID: \"03ce63d0-6e1f-4d88-a26c-05b867554db5\") "
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.131794 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities" (OuterVolumeSpecName: "utilities") pod "03ce63d0-6e1f-4d88-a26c-05b867554db5" (UID: "03ce63d0-6e1f-4d88-a26c-05b867554db5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.146517 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc" (OuterVolumeSpecName: "kube-api-access-rzbxc") pod "03ce63d0-6e1f-4d88-a26c-05b867554db5" (UID: "03ce63d0-6e1f-4d88-a26c-05b867554db5"). InnerVolumeSpecName "kube-api-access-rzbxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.233511 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.233980 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbxc\" (UniqueName: \"kubernetes.io/projected/03ce63d0-6e1f-4d88-a26c-05b867554db5-kube-api-access-rzbxc\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.241541 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xdtxt"]
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.354774 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03ce63d0-6e1f-4d88-a26c-05b867554db5" (UID: "03ce63d0-6e1f-4d88-a26c-05b867554db5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.439679 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03ce63d0-6e1f-4d88-a26c-05b867554db5-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.776105 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" event={"ID":"955f8136-42b0-4fce-8853-3c467cf8d070","Type":"ContainerStarted","Data":"a1377fb54f325e6c1c80fbc8782ef302ee0098c99d7014009e5fcc26872a32b7"}
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.781423 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerStarted","Data":"8cf5f01aea04ac23eff56442767dfaa85f4d9467e280928589b1ca9268ad367e"}
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.795403 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m57j8"
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.797279 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m57j8" event={"ID":"03ce63d0-6e1f-4d88-a26c-05b867554db5","Type":"ContainerDied","Data":"aa36a8f400a19486c0eb53c2b724b8b4249634a01fe08af3726690ed755dd464"}
Dec 09 12:29:46 crc kubenswrapper[4703]: I1209 12:29:46.797366 4703 scope.go:117] "RemoveContainer" containerID="68afdfa339f9a8f0da9ae3cbe4ac53e990e55a3ecce98b631fa8b0a4c4682a7f"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.053818 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.065033 4703 scope.go:117] "RemoveContainer" containerID="a14c9a018ccdb3b32b872d1f789f41c1b40d95edf3fdae8166f9924ad4afafd6"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.111789 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"]
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.121536 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m57j8"]
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.165986 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts7ph\" (UniqueName: \"kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.166458 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.166909 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.167185 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.167761 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.168647 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.168883 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.169507 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run\") pod \"a92bceed-7795-442f-99c4-c852c51c6284\" (UID: \"a92bceed-7795-442f-99c4-c852c51c6284\") "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.172298 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.185633 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs" (OuterVolumeSpecName: "logs") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.199430 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts" (OuterVolumeSpecName: "scripts") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.199471 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph" (OuterVolumeSpecName: "kube-api-access-ts7ph") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "kube-api-access-ts7ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.234079 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4" (OuterVolumeSpecName: "glance") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.257166 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.257499 4703 scope.go:117] "RemoveContainer" containerID="0228cb72e4c9d928c1122bbcfbbf8512ebc7f66513a5cfdcdfc65cbfc2a7c9d8"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.275938 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.275977 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.275988 4703 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.276000 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts7ph\" (UniqueName: \"kubernetes.io/projected/a92bceed-7795-442f-99c4-c852c51c6284-kube-api-access-ts7ph\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.276030 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") on node \"crc\" "
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.276043 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a92bceed-7795-442f-99c4-c852c51c6284-logs\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.308833 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data" (OuterVolumeSpecName: "config-data") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.314990 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.315209 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4") on node "crc"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.326110 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a92bceed-7795-442f-99c4-c852c51c6284" (UID: "a92bceed-7795-442f-99c4-c852c51c6284"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.383469 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.383531 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.383547 4703 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a92bceed-7795-442f-99c4-c852c51c6284-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.817505 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a92bceed-7795-442f-99c4-c852c51c6284","Type":"ContainerDied","Data":"cf1af995918b617cc87b7847847a40eee22bff6313becd320fef250468f5ac72"}
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.818276 4703 scope.go:117] "RemoveContainer" containerID="e665c0eec651eb3aea395c83654ee943673bd3b89396ca192e184ffc6709f905"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.817533 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.826677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerStarted","Data":"d229ccd0e57dad6cee969c55c5bb1dd86e7bc1da36d31bbb98d785d9e425bd9c"}
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.880461 4703 scope.go:117] "RemoveContainer" containerID="dd5e176b5dc8e23e00e8dc2a1f9d5037a6aa2d80f99ab89d8a59b5211d30d1e8"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.901008 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.920280 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.933252 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:47 crc kubenswrapper[4703]: E1209 12:29:47.933923 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-httpd"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.933948 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-httpd"
Dec 09 12:29:47 crc kubenswrapper[4703]: E1209 12:29:47.933971 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-log"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.933977 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-log"
Dec 09 12:29:47 crc kubenswrapper[4703]: E1209 12:29:47.933992 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="extract-utilities"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.933999 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="extract-utilities"
Dec 09 12:29:47 crc kubenswrapper[4703]: E1209 12:29:47.934011 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="extract-content"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.934018 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="extract-content"
Dec 09 12:29:47 crc kubenswrapper[4703]: E1209 12:29:47.934038 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.934044 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.934287 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" containerName="registry-server"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.934305 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-httpd"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.934316 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a92bceed-7795-442f-99c4-c852c51c6284" containerName="glance-log"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.935617 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.952101 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.952378 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.952577 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.952701 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5lwc2"
Dec 09 12:29:47 crc kubenswrapper[4703]: I1209 12:29:47.978016 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000156 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000243 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000312 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000341 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000374 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000413 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000482 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.000545 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knnvw\" (UniqueName: \"kubernetes.io/projected/d5aa154d-168f-4bf7-a86f-85f3f8989c41-kube-api-access-knnvw\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.102824 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.102907 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103011 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103044 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103078 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103117 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103232 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.103325 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knnvw\" (UniqueName: \"kubernetes.io/projected/d5aa154d-168f-4bf7-a86f-85f3f8989c41-kube-api-access-knnvw\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.105649 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.106242 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5aa154d-168f-4bf7-a86f-85f3f8989c41-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.113277 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.125107 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.128545 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.130085 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5aa154d-168f-4bf7-a86f-85f3f8989c41-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.134896 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knnvw\" (UniqueName: \"kubernetes.io/projected/d5aa154d-168f-4bf7-a86f-85f3f8989c41-kube-api-access-knnvw\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.154350 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.154416 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/797e4922fbff3a4efc14293ba89fb3aa4e856c71aa0daf9f22ac638d77d5a369/globalmount\"" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.358229 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cfecbed1-519b-4af5-9dfc-e582a380e9a4\") pod \"glance-default-internal-api-0\" (UID: \"d5aa154d-168f-4bf7-a86f-85f3f8989c41\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:48 crc kubenswrapper[4703]: I1209 12:29:48.578787 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:49 crc kubenswrapper[4703]: I1209 12:29:49.093435 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ce63d0-6e1f-4d88-a26c-05b867554db5" path="/var/lib/kubelet/pods/03ce63d0-6e1f-4d88-a26c-05b867554db5/volumes"
Dec 09 12:29:49 crc kubenswrapper[4703]: I1209 12:29:49.095490 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a92bceed-7795-442f-99c4-c852c51c6284" path="/var/lib/kubelet/pods/a92bceed-7795-442f-99c4-c852c51c6284/volumes"
Dec 09 12:29:49 crc kubenswrapper[4703]: I1209 12:29:49.096510 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:29:49 crc kubenswrapper[4703]: I1209 12:29:49.309596 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:29:49 crc kubenswrapper[4703]: W1209 12:29:49.325299 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5aa154d_168f_4bf7_a86f_85f3f8989c41.slice/crio-a08110b9a4a827754d1a007c5cb33672e8139144aa4a6aa9c990c7552dcfa00d WatchSource:0}: Error finding container a08110b9a4a827754d1a007c5cb33672e8139144aa4a6aa9c990c7552dcfa00d: Status 404 returned error can't find the container with id a08110b9a4a827754d1a007c5cb33672e8139144aa4a6aa9c990c7552dcfa00d
Dec 09 12:29:49 crc kubenswrapper[4703]: I1209 12:29:49.902171 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5aa154d-168f-4bf7-a86f-85f3f8989c41","Type":"ContainerStarted","Data":"a08110b9a4a827754d1a007c5cb33672e8139144aa4a6aa9c990c7552dcfa00d"}
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.939746 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-central-agent" containerID="cri-o://e374ad9846e8bc4fa09abe97caf319e7922096a04787be0cd2238c81120960c0" gracePeriod=30
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.940706 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.941262 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="proxy-httpd" containerID="cri-o://a485ed4f1f568db8578cea11283f2481188b69eef867ce794b79211dc66a3225" gracePeriod=30
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.941325 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-notification-agent" containerID="cri-o://8cf5f01aea04ac23eff56442767dfaa85f4d9467e280928589b1ca9268ad367e" gracePeriod=30
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.941491 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="sg-core" containerID="cri-o://d229ccd0e57dad6cee969c55c5bb1dd86e7bc1da36d31bbb98d785d9e425bd9c" gracePeriod=30
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.949441 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5aa154d-168f-4bf7-a86f-85f3f8989c41","Type":"ContainerStarted","Data":"3fbf3ddc74b00829893c6e716a8f3d3548bee87a0cd0022a8d98bb13a4089f69"}
Dec 09 12:29:50 crc kubenswrapper[4703]: I1209 12:29:50.992803 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.4981141989999998 podStartE2EDuration="13.992775837s" podCreationTimestamp="2025-12-09 12:29:37 +0000 UTC" firstStartedPulling="2025-12-09 12:29:38.945350133 +0000 UTC m=+1478.194113652" lastFinishedPulling="2025-12-09 12:29:50.440011771 +0000 UTC m=+1489.688775290" observedRunningTime="2025-12-09 12:29:50.977313131 +0000 UTC m=+1490.226076650" watchObservedRunningTime="2025-12-09 12:29:50.992775837 +0000 UTC m=+1490.241539356"
Dec 09 12:29:51 crc kubenswrapper[4703]: I1209 12:29:51.986327 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5aa154d-168f-4bf7-a86f-85f3f8989c41","Type":"ContainerStarted","Data":"e50f46d844744f6e446320bb5fa5ff4a88327cd63dc31c2753b715f2a7b4583b"}
Dec 09 12:29:51 crc kubenswrapper[4703]: I1209 12:29:51.997925 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerID="d229ccd0e57dad6cee969c55c5bb1dd86e7bc1da36d31bbb98d785d9e425bd9c" exitCode=2
Dec 09 12:29:51 crc kubenswrapper[4703]: I1209 12:29:51.997984 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerStarted","Data":"a485ed4f1f568db8578cea11283f2481188b69eef867ce794b79211dc66a3225"}
Dec 09 12:29:51 crc kubenswrapper[4703]: I1209 12:29:51.998013 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerDied","Data":"d229ccd0e57dad6cee969c55c5bb1dd86e7bc1da36d31bbb98d785d9e425bd9c"}
Dec 09 12:29:52 crc kubenswrapper[4703]: I1209 12:29:52.017828 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.017801529 podStartE2EDuration="5.017801529s" podCreationTimestamp="2025-12-09 12:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:52.01466881 +0000 UTC m=+1491.263432339" watchObservedRunningTime="2025-12-09 12:29:52.017801529 +0000 UTC m=+1491.266565048"
Dec 09 12:29:53 crc kubenswrapper[4703]: I1209 12:29:53.022431 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerID="8cf5f01aea04ac23eff56442767dfaa85f4d9467e280928589b1ca9268ad367e" exitCode=0
Dec 09 12:29:53 crc kubenswrapper[4703]: I1209 12:29:53.022506 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerDied","Data":"8cf5f01aea04ac23eff56442767dfaa85f4d9467e280928589b1ca9268ad367e"}
Dec 09 12:29:55 crc kubenswrapper[4703]: I1209 12:29:55.780156 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 09 12:29:58 crc kubenswrapper[4703]: I1209 12:29:58.579109 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:58 crc kubenswrapper[4703]: I1209 12:29:58.579949 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:59 crc kubenswrapper[4703]: I1209 12:29:59.353376 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:59 crc kubenswrapper[4703]: I1209 12:29:59.355236 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:59 crc kubenswrapper[4703]: I1209 12:29:59.367070 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Dec 09 12:29:59 crc kubenswrapper[4703]: I1209 12:29:59.621881 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="20fe1291-16c1-4602-b70d-fad8bda0f61b" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.188:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:29:59 crc kubenswrapper[4703]: I1209 12:29:59.622010 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="20fe1291-16c1-4602-b70d-fad8bda0f61b" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.188:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.083982 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.084077 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.145956 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.188254 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"]
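
The probe failures above are plain HTTP GET checks: cloudkitty-api timed out while waiting for response headers, and machine-config-daemon's port refused the connection outright. A minimal illustration of such a check with a hard client-side deadline; the URL and timeout are placeholders for illustration, not the pods' actual probe specs:

    package sketch

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce performs one HTTP GET with a client-side deadline. The
    // error strings in the log map onto its failure modes: a stalled
    // endpoint surfaces as "Client.Timeout exceeded while awaiting
    // headers", a closed port as "connect: connection refused", and a
    // 5xx response as an unhealthy status code.
    func probeOnce(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return nil
    }
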
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.190634 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.218706 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.218992 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.237526 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"]
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.324906 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.325122 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4njfc\" (UniqueName: \"kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.325155 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.428781 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4njfc\" (UniqueName: \"kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.429276 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.429468 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.430772 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.436039 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.454135 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4njfc\" (UniqueName: \"kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc\") pod \"collect-profiles-29421390-7x87k\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:00 crc kubenswrapper[4703]: I1209 12:30:00.567586 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"
Dec 09 12:30:01 crc kubenswrapper[4703]: I1209 12:30:01.044542 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.197:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:30:01 crc kubenswrapper[4703]: I1209 12:30:01.166118 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 12:30:02 crc kubenswrapper[4703]: I1209 12:30:02.185237 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 12:30:02 crc kubenswrapper[4703]: I1209 12:30:02.185581 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 12:30:03 crc kubenswrapper[4703]: E1209 12:30:03.414951 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified"
Dec 09 12:30:03 crc kubenswrapper[4703]: E1209 12:30:03.415529 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2df6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-xdtxt_openstack(955f8136-42b0-4fce-8853-3c467cf8d070): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 12:30:03 crc kubenswrapper[4703]: E1209 12:30:03.416812 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" podUID="955f8136-42b0-4fce-8853-3c467cf8d070"
Dec 09 12:30:03 crc kubenswrapper[4703]: I1209 12:30:03.560948 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0"
Dec 09 12:30:03 crc kubenswrapper[4703]: I1209 12:30:03.956501 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"]
Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.225073 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerID="e374ad9846e8bc4fa09abe97caf319e7922096a04787be0cd2238c81120960c0" exitCode=0
Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.225177 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerDied","Data":"e374ad9846e8bc4fa09abe97caf319e7922096a04787be0cd2238c81120960c0"}
Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.229359 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k" event={"ID":"624667b4-444c-42a5-91e1-cd0bbc2f79ae","Type":"ContainerStarted","Data":"c87568f8a7492d4bc0d35bd6bd958d61d182fc921d3174058217a2025337dc09"}
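
The ErrImagePull above is not retried immediately: the next sync (12:30:04, below) fails with ImagePullBackOff, the kubelet's increasing delay between pull attempts. A generic sketch of a capped doubling backoff of that shape; the initial delay and ceiling here are illustrative assumptions, not the kubelet's configured values:

    package sketch

    import "time"

    // nextPullDelay doubles the wait after each failed image pull, up to
    // a ceiling; this is the pattern behind repeated "Back-off pulling
    // image" messages between attempts.
    func nextPullDelay(prev time.Duration) time.Duration {
    	const initial = 10 * time.Second // assumed starting delay
    	const maxDelay = 5 * time.Minute // assumed ceiling
    	if prev <= 0 {
    		return initial
    	}
    	if next := prev * 2; next < maxDelay {
    		return next
    	}
    	return maxDelay
    }
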
event={"ID":"624667b4-444c-42a5-91e1-cd0bbc2f79ae","Type":"ContainerStarted","Data":"c87568f8a7492d4bc0d35bd6bd958d61d182fc921d3174058217a2025337dc09"} Dec 09 12:30:04 crc kubenswrapper[4703]: E1209 12:30:04.233464 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" podUID="955f8136-42b0-4fce-8853-3c467cf8d070" Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.484566 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.484721 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:30:04 crc kubenswrapper[4703]: I1209 12:30:04.487206 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:05 crc kubenswrapper[4703]: I1209 12:30:05.241721 4703 generic.go:334] "Generic (PLEG): container finished" podID="624667b4-444c-42a5-91e1-cd0bbc2f79ae" containerID="9dfa5b1ce6aff44fbb0e693c090028d418a01dde89bd591b3d661566e3951420" exitCode=0 Dec 09 12:30:05 crc kubenswrapper[4703]: I1209 12:30:05.244139 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k" event={"ID":"624667b4-444c-42a5-91e1-cd0bbc2f79ae","Type":"ContainerDied","Data":"9dfa5b1ce6aff44fbb0e693c090028d418a01dde89bd591b3d661566e3951420"} Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.767206 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.832248 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4njfc\" (UniqueName: \"kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc\") pod \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.832410 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume\") pod \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.832691 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume\") pod \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\" (UID: \"624667b4-444c-42a5-91e1-cd0bbc2f79ae\") " Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.838025 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "624667b4-444c-42a5-91e1-cd0bbc2f79ae" (UID: "624667b4-444c-42a5-91e1-cd0bbc2f79ae"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.842084 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "624667b4-444c-42a5-91e1-cd0bbc2f79ae" (UID: "624667b4-444c-42a5-91e1-cd0bbc2f79ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.849631 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc" (OuterVolumeSpecName: "kube-api-access-4njfc") pod "624667b4-444c-42a5-91e1-cd0bbc2f79ae" (UID: "624667b4-444c-42a5-91e1-cd0bbc2f79ae"). InnerVolumeSpecName "kube-api-access-4njfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.935289 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4njfc\" (UniqueName: \"kubernetes.io/projected/624667b4-444c-42a5-91e1-cd0bbc2f79ae-kube-api-access-4njfc\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.935349 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/624667b4-444c-42a5-91e1-cd0bbc2f79ae-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:06 crc kubenswrapper[4703]: I1209 12:30:06.935363 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/624667b4-444c-42a5-91e1-cd0bbc2f79ae-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4703]: I1209 12:30:07.267633 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k" event={"ID":"624667b4-444c-42a5-91e1-cd0bbc2f79ae","Type":"ContainerDied","Data":"c87568f8a7492d4bc0d35bd6bd958d61d182fc921d3174058217a2025337dc09"} Dec 09 12:30:07 crc kubenswrapper[4703]: I1209 12:30:07.267688 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87568f8a7492d4bc0d35bd6bd958d61d182fc921d3174058217a2025337dc09" Dec 09 12:30:07 crc kubenswrapper[4703]: I1209 12:30:07.267755 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k" Dec 09 12:30:07 crc kubenswrapper[4703]: I1209 12:30:07.345068 4703 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddb4f79b4-592c-4dc1-ad09-c1582b9d8497"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddb4f79b4-592c-4dc1-ad09-c1582b9d8497] : Timed out while waiting for systemd to remove kubepods-besteffort-poddb4f79b4_592c_4dc1_ad09_c1582b9d8497.slice" Dec 09 12:30:07 crc kubenswrapper[4703]: E1209 12:30:07.345134 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poddb4f79b4-592c-4dc1-ad09-c1582b9d8497] : unable to destroy cgroup paths for cgroup [kubepods besteffort poddb4f79b4-592c-4dc1-ad09-c1582b9d8497] : Timed out while waiting for systemd to remove kubepods-besteffort-poddb4f79b4_592c_4dc1_ad09_c1582b9d8497.slice" pod="openstack/glance-default-external-api-0" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" Dec 09 12:30:07 crc kubenswrapper[4703]: I1209 12:30:07.360233 4703 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode5e4fae3-dab6-4aab-87fa-01b51c6f05db"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode5e4fae3-dab6-4aab-87fa-01b51c6f05db] : Timed out while waiting for systemd to remove kubepods-besteffort-pode5e4fae3_dab6_4aab_87fa_01b51c6f05db.slice" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.241010 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.278561 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.307255 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.321138 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.338288 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:08 crc kubenswrapper[4703]: E1209 12:30:08.338948 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624667b4-444c-42a5-91e1-cd0bbc2f79ae" containerName="collect-profiles" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.338978 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="624667b4-444c-42a5-91e1-cd0bbc2f79ae" containerName="collect-profiles" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.339314 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="624667b4-444c-42a5-91e1-cd0bbc2f79ae" containerName="collect-profiles" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.340976 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.345060 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.346733 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.359644 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.517785 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.517872 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqm4m\" (UniqueName: \"kubernetes.io/projected/38e472f5-3c76-4959-b813-119ab542819e-kube-api-access-xqm4m\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.517895 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.517914 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-config-data\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.517959 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-scripts\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.518005 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.518125 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-logs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.518144 4703 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620367 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620600 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-logs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620633 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620724 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqm4m\" (UniqueName: \"kubernetes.io/projected/38e472f5-3c76-4959-b813-119ab542819e-kube-api-access-xqm4m\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620792 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620815 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-config-data\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.620875 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-scripts\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0" Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.621365 4703 
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.621365 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-logs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.622687 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38e472f5-3c76-4959-b813-119ab542819e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.625047 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.625103 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/886135ab14297acda1ec732ab82ea94d7b160862d21cc21e3e28b5e1ec2b3603/globalmount\"" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.627290 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.627804 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-config-data\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.630820 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.631017 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e472f5-3c76-4959-b813-119ab542819e-scripts\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.643214 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqm4m\" (UniqueName: \"kubernetes.io/projected/38e472f5-3c76-4959-b813-119ab542819e-kube-api-access-xqm4m\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.690121 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4442429-2353-4ea7-ac70-798afe17b703\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4442429-2353-4ea7-ac70-798afe17b703\") pod \"glance-default-external-api-0\" (UID: \"38e472f5-3c76-4959-b813-119ab542819e\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4703]: I1209 12:30:08.978522 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:09 crc kubenswrapper[4703]: I1209 12:30:09.087831 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4f79b4-592c-4dc1-ad09-c1582b9d8497" path="/var/lib/kubelet/pods/db4f79b4-592c-4dc1-ad09-c1582b9d8497/volumes"
Dec 09 12:30:09 crc kubenswrapper[4703]: I1209 12:30:09.640640 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:30:10 crc kubenswrapper[4703]: I1209 12:30:10.304035 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38e472f5-3c76-4959-b813-119ab542819e","Type":"ContainerStarted","Data":"1c86f5ae54fa1fe0062b4d9906b5e2e3ff923eec9b68a87c52a2b86a9af8c431"}
Dec 09 12:30:11 crc kubenswrapper[4703]: I1209 12:30:11.319045 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38e472f5-3c76-4959-b813-119ab542819e","Type":"ContainerStarted","Data":"f5600e88a49b64e02947acb3886e51721c4951b03e787e90f381e7dc62514df5"}
Dec 09 12:30:11 crc kubenswrapper[4703]: I1209 12:30:11.319980 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38e472f5-3c76-4959-b813-119ab542819e","Type":"ContainerStarted","Data":"d1d22c00828cc7bfe090a837091afd9f44181c2d81f53484be26e4f9b4d51d30"}
Dec 09 12:30:11 crc kubenswrapper[4703]: I1209 12:30:11.347960 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.34793266 podStartE2EDuration="3.34793266s" podCreationTimestamp="2025-12-09 12:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:11.343032518 +0000 UTC m=+1510.591796037" watchObservedRunningTime="2025-12-09 12:30:11.34793266 +0000 UTC m=+1510.596696179"
Dec 09 12:30:18 crc kubenswrapper[4703]: I1209 12:30:18.978809 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:18 crc kubenswrapper[4703]: I1209 12:30:18.979875 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:19 crc kubenswrapper[4703]: I1209 12:30:19.013035 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:19 crc kubenswrapper[4703]: I1209 12:30:19.024534 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:19 crc kubenswrapper[4703]: I1209 12:30:19.426853 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:19 crc kubenswrapper[4703]: I1209 12:30:19.426911 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:20 crc kubenswrapper[4703]: I1209 12:30:20.439458 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" event={"ID":"955f8136-42b0-4fce-8853-3c467cf8d070","Type":"ContainerStarted","Data":"f5e137403eee78de79fb4ff40bf3ebdec88e1d547fca3518aa70ff40c87b7550"}
Dec 09 12:30:20 crc kubenswrapper[4703]: I1209 12:30:20.464303 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" podStartSLOduration=3.197077102 podStartE2EDuration="36.464279036s" podCreationTimestamp="2025-12-09 12:29:44 +0000 UTC" firstStartedPulling="2025-12-09 12:29:46.255435374 +0000 UTC m=+1485.504198903" lastFinishedPulling="2025-12-09 12:30:19.522637318 +0000 UTC m=+1518.771400837" observedRunningTime="2025-12-09 12:30:20.461826706 +0000 UTC m=+1519.710590225" watchObservedRunningTime="2025-12-09 12:30:20.464279036 +0000 UTC m=+1519.713042565"
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.458331 4703 generic.go:334] "Generic (PLEG): container finished" podID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerID="a485ed4f1f568db8578cea11283f2481188b69eef867ce794b79211dc66a3225" exitCode=137
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.458830 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerDied","Data":"a485ed4f1f568db8578cea11283f2481188b69eef867ce794b79211dc66a3225"}
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.922911 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.941673 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.941904 4703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985146 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985414 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985656 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985702 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985754 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985862 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbzv8\" (UniqueName: \"kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.985900 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts\") pod \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\" (UID: \"8ec047e2-1461-4fc0-b1cf-d149cc23924b\") "
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.986996 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:30:21 crc kubenswrapper[4703]: I1209 12:30:21.987387 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.010670 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8" (OuterVolumeSpecName: "kube-api-access-vbzv8") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "kube-api-access-vbzv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.017019 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts" (OuterVolumeSpecName: "scripts") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.073859 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.096805 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.096847 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ec047e2-1461-4fc0-b1cf-d149cc23924b-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.096861 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.096877 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbzv8\" (UniqueName: \"kubernetes.io/projected/8ec047e2-1461-4fc0-b1cf-d149cc23924b-kube-api-access-vbzv8\") on node \"crc\" DevicePath \"\""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.096889 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.106838 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.205813 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.213699 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data" (OuterVolumeSpecName: "config-data") pod "8ec047e2-1461-4fc0-b1cf-d149cc23924b" (UID: "8ec047e2-1461-4fc0-b1cf-d149cc23924b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.303528 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.303618 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ec047e2-1461-4fc0-b1cf-d149cc23924b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.474307 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8ec047e2-1461-4fc0-b1cf-d149cc23924b","Type":"ContainerDied","Data":"6918f1c9131d6a078c9545201a4c29c25d2381de0150e526b27448f44f3916c1"} Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.474366 4703 scope.go:117] "RemoveContainer" containerID="a485ed4f1f568db8578cea11283f2481188b69eef867ce794b79211dc66a3225" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.474384 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.516931 4703 scope.go:117] "RemoveContainer" containerID="d229ccd0e57dad6cee969c55c5bb1dd86e7bc1da36d31bbb98d785d9e425bd9c" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.520795 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.551313 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.556339 4703 scope.go:117] "RemoveContainer" containerID="8cf5f01aea04ac23eff56442767dfaa85f4d9467e280928589b1ca9268ad367e" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.572087 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:22 crc kubenswrapper[4703]: E1209 12:30:22.575848 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="proxy-httpd" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.575924 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="proxy-httpd" Dec 09 12:30:22 crc kubenswrapper[4703]: E1209 12:30:22.575965 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="sg-core" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.575974 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="sg-core" Dec 09 12:30:22 crc kubenswrapper[4703]: E1209 12:30:22.576049 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-central-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576059 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-central-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: E1209 12:30:22.576114 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-notification-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576121 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-notification-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576663 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-notification-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576708 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="proxy-httpd" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576723 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="ceilometer-central-agent" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.576736 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" containerName="sg-core" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.579934 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.585910 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.586217 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.597626 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.603746 4703 scope.go:117] "RemoveContainer" containerID="e374ad9846e8bc4fa09abe97caf319e7922096a04787be0cd2238c81120960c0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.612996 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.613061 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9wn\" (UniqueName: \"kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.613213 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.613237 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.613457 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.613556 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.614155 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.717098 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.717334 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9wn\" (UniqueName: \"kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.717451 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.717480 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.718218 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.718826 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.718960 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.719332 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd\") pod \"ceilometer-0\" (UID: 
\"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.720237 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.722020 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.722639 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.724314 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.726589 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.746557 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9wn\" (UniqueName: \"kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn\") pod \"ceilometer-0\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " pod="openstack/ceilometer-0" Dec 09 12:30:22 crc kubenswrapper[4703]: I1209 12:30:22.905183 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:23 crc kubenswrapper[4703]: I1209 12:30:23.094563 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec047e2-1461-4fc0-b1cf-d149cc23924b" path="/var/lib/kubelet/pods/8ec047e2-1461-4fc0-b1cf-d149cc23924b/volumes" Dec 09 12:30:23 crc kubenswrapper[4703]: W1209 12:30:23.437041 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbee6594d_eea7_4044_8404_4210a11002c6.slice/crio-16212a9969b8796a35b34958b3d3905d6f353beb760dead5b96d66f3eb771c5b WatchSource:0}: Error finding container 16212a9969b8796a35b34958b3d3905d6f353beb760dead5b96d66f3eb771c5b: Status 404 returned error can't find the container with id 16212a9969b8796a35b34958b3d3905d6f353beb760dead5b96d66f3eb771c5b Dec 09 12:30:23 crc kubenswrapper[4703]: I1209 12:30:23.443171 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:23 crc kubenswrapper[4703]: I1209 12:30:23.489825 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerStarted","Data":"16212a9969b8796a35b34958b3d3905d6f353beb760dead5b96d66f3eb771c5b"} Dec 09 12:30:24 crc kubenswrapper[4703]: I1209 12:30:24.507777 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerStarted","Data":"05043eabf6157ae390d134c325de17dd09fb446af0334e80a864a9718d44ce73"} Dec 09 12:30:25 crc kubenswrapper[4703]: I1209 12:30:25.532214 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerStarted","Data":"766b563d79b84373a1fedaec7872f8e9a46fcc7572bb9a95daf5ebc5496b07fd"} Dec 09 12:30:26 crc kubenswrapper[4703]: I1209 12:30:26.547836 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerStarted","Data":"f46a826e082edf0b6988dd959b983b3b6a2f207cc6cb53f6bea25ff15d7a14b1"} Dec 09 12:30:28 crc kubenswrapper[4703]: I1209 12:30:28.581401 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerStarted","Data":"28c5f0b351836a1e432584bb5e2c92f85017eea9d15481a46f95b37c59a0785a"} Dec 09 12:30:28 crc kubenswrapper[4703]: I1209 12:30:28.581986 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:30:28 crc kubenswrapper[4703]: I1209 12:30:28.614572 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.202047074 podStartE2EDuration="6.614540612s" podCreationTimestamp="2025-12-09 12:30:22 +0000 UTC" firstStartedPulling="2025-12-09 12:30:23.440339729 +0000 UTC m=+1522.689103248" lastFinishedPulling="2025-12-09 12:30:27.852833267 +0000 UTC m=+1527.101596786" observedRunningTime="2025-12-09 12:30:28.606987234 +0000 UTC m=+1527.855750753" watchObservedRunningTime="2025-12-09 12:30:28.614540612 +0000 UTC m=+1527.863304131" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.083881 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.084475 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.084535 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.085774 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.085857 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" gracePeriod=600 Dec 09 12:30:30 crc kubenswrapper[4703]: E1209 12:30:30.222944 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.609835 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" exitCode=0 Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.609944 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"} Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.610273 4703 scope.go:117] "RemoveContainer" containerID="75604f5dbc97ce29a121f656b65fc7350b377b2e69e9598ea482a258333f6101" Dec 09 12:30:30 crc kubenswrapper[4703]: I1209 12:30:30.611918 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:30:30 crc kubenswrapper[4703]: E1209 12:30:30.612411 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:30:39 crc kubenswrapper[4703]: I1209 12:30:39.738761 4703 generic.go:334] "Generic (PLEG): container finished" 
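The liveness failure above is an ordinary HTTP GET that cannot connect. A minimal Go stand-in for what the kubelet prober is doing here (not the prober's actual code; the URL is taken from the log output):

    // Reproduce the failing liveness check: GET the machine-config-daemon
    // health endpoint and report the error, e.g.
    // "dial tcp 127.0.0.1:8798: connect: connection refused".
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe result:", resp.Status) // 2xx/3xx counts as healthy
    }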
podID="955f8136-42b0-4fce-8853-3c467cf8d070" containerID="f5e137403eee78de79fb4ff40bf3ebdec88e1d547fca3518aa70ff40c87b7550" exitCode=0 Dec 09 12:30:39 crc kubenswrapper[4703]: I1209 12:30:39.738834 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" event={"ID":"955f8136-42b0-4fce-8853-3c467cf8d070","Type":"ContainerDied","Data":"f5e137403eee78de79fb4ff40bf3ebdec88e1d547fca3518aa70ff40c87b7550"} Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.256567 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.448535 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2df6\" (UniqueName: \"kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6\") pod \"955f8136-42b0-4fce-8853-3c467cf8d070\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.448629 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data\") pod \"955f8136-42b0-4fce-8853-3c467cf8d070\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.448763 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle\") pod \"955f8136-42b0-4fce-8853-3c467cf8d070\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.448820 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts\") pod \"955f8136-42b0-4fce-8853-3c467cf8d070\" (UID: \"955f8136-42b0-4fce-8853-3c467cf8d070\") " Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.464556 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6" (OuterVolumeSpecName: "kube-api-access-p2df6") pod "955f8136-42b0-4fce-8853-3c467cf8d070" (UID: "955f8136-42b0-4fce-8853-3c467cf8d070"). InnerVolumeSpecName "kube-api-access-p2df6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.469468 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts" (OuterVolumeSpecName: "scripts") pod "955f8136-42b0-4fce-8853-3c467cf8d070" (UID: "955f8136-42b0-4fce-8853-3c467cf8d070"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.483807 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data" (OuterVolumeSpecName: "config-data") pod "955f8136-42b0-4fce-8853-3c467cf8d070" (UID: "955f8136-42b0-4fce-8853-3c467cf8d070"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.490874 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "955f8136-42b0-4fce-8853-3c467cf8d070" (UID: "955f8136-42b0-4fce-8853-3c467cf8d070"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.552505 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2df6\" (UniqueName: \"kubernetes.io/projected/955f8136-42b0-4fce-8853-3c467cf8d070-kube-api-access-p2df6\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.552553 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.552565 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.552575 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/955f8136-42b0-4fce-8853-3c467cf8d070-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.773544 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" event={"ID":"955f8136-42b0-4fce-8853-3c467cf8d070","Type":"ContainerDied","Data":"a1377fb54f325e6c1c80fbc8782ef302ee0098c99d7014009e5fcc26872a32b7"} Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.773603 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xdtxt" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.773604 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1377fb54f325e6c1c80fbc8782ef302ee0098c99d7014009e5fcc26872a32b7" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.900615 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:41 crc kubenswrapper[4703]: E1209 12:30:41.901132 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955f8136-42b0-4fce-8853-3c467cf8d070" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.901173 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="955f8136-42b0-4fce-8853-3c467cf8d070" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.901468 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="955f8136-42b0-4fce-8853-3c467cf8d070" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.904662 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.909835 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.909857 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-295qv" Dec 09 12:30:41 crc kubenswrapper[4703]: I1209 12:30:41.921035 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.062759 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.062840 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wh6\" (UniqueName: \"kubernetes.io/projected/16228bdd-1db6-4397-b09a-b372d7957ad8-kube-api-access-c7wh6\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.062862 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.069545 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:30:42 crc kubenswrapper[4703]: E1209 12:30:42.069995 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.164400 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7wh6\" (UniqueName: \"kubernetes.io/projected/16228bdd-1db6-4397-b09a-b372d7957ad8-kube-api-access-c7wh6\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.164439 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.164603 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " 
pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.172832 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.172928 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16228bdd-1db6-4397-b09a-b372d7957ad8-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.185158 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wh6\" (UniqueName: \"kubernetes.io/projected/16228bdd-1db6-4397-b09a-b372d7957ad8-kube-api-access-c7wh6\") pod \"nova-cell0-conductor-0\" (UID: \"16228bdd-1db6-4397-b09a-b372d7957ad8\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.236691 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:42 crc kubenswrapper[4703]: I1209 12:30:42.796712 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:43 crc kubenswrapper[4703]: I1209 12:30:43.817907 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"16228bdd-1db6-4397-b09a-b372d7957ad8","Type":"ContainerStarted","Data":"2419703b1b562909ff52a675615a3d8987ca3e2c953a553bd0d26d301579508d"} Dec 09 12:30:43 crc kubenswrapper[4703]: I1209 12:30:43.818556 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"16228bdd-1db6-4397-b09a-b372d7957ad8","Type":"ContainerStarted","Data":"d17787fef0fdaa345b87206c9954bca5b03a9341cf3c8a723a50f488d5b3b3a5"} Dec 09 12:30:43 crc kubenswrapper[4703]: I1209 12:30:43.818584 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:43 crc kubenswrapper[4703]: I1209 12:30:43.852604 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.852579578 podStartE2EDuration="2.852579578s" podCreationTimestamp="2025-12-09 12:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:43.839903682 +0000 UTC m=+1543.088667201" watchObservedRunningTime="2025-12-09 12:30:43.852579578 +0000 UTC m=+1543.101343097" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.362685 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.920719 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zhqpt"] Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.928413 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.937484 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.937863 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.993296 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdtqv\" (UniqueName: \"kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.993398 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.993904 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:52 crc kubenswrapper[4703]: I1209 12:30:52.993997 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.006803 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhqpt"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.017059 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.105057 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.105134 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.105278 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdtqv\" (UniqueName: \"kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 
12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.105332 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.126652 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.131263 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.158634 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.184572 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.201436 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.210733 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.214360 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdtqv\" (UniqueName: \"kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv\") pod \"nova-cell0-cell-mapping-zhqpt\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.312380 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.312548 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.312610 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q56gt\" (UniqueName: \"kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.321829 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.332865 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.391840 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.394070 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.398127 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.410499 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.464839 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.464994 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.465092 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.465133 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q56gt\" (UniqueName: \"kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.465218 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.465371 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjs4\" (UniqueName: \"kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.465436 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.474103 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.489851 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.500813 
4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.501622 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.510751 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.533851 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q56gt\" (UniqueName: \"kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt\") pod \"nova-scheduler-0\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.538955 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578489 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578563 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4nr\" (UniqueName: \"kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578638 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578663 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578702 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578771 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578807 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjs4\" (UniqueName: 
\"kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.578830 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.579413 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.592361 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.629132 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.634899 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.648990 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.650862 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.671782 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjs4\" (UniqueName: \"kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4\") pod \"nova-metadata-0\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.672782 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.703401 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.703682 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.703789 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn4nr\" (UniqueName: \"kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.703935 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.703976 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.704059 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l69h\" (UniqueName: \"kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.704084 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.710822 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 
12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.710878 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.732588 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.772413 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn4nr\" (UniqueName: \"kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.783571 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.793946 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") " pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.801920 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.813679 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.814650 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb89q\" (UniqueName: \"kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.814796 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.814929 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.815064 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " 
pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.815271 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l69h\" (UniqueName: \"kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.815405 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.815529 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.815687 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.827224 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.827814 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.831385 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.851412 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l69h\" (UniqueName: \"kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h\") pod \"nova-cell1-novncproxy-0\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.871957 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.906841 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.918511 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.925518 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb89q\" (UniqueName: \"kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.925628 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.925662 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.925851 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.925963 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.928106 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.928814 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.955438 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 
12:30:53.957133 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.957417 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:53 crc kubenswrapper[4703]: I1209 12:30:53.975717 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb89q\" (UniqueName: \"kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q\") pod \"dnsmasq-dns-78cd565959-hnbh4\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:54 crc kubenswrapper[4703]: I1209 12:30:54.024981 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:54 crc kubenswrapper[4703]: I1209 12:30:54.119796 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:30:54 crc kubenswrapper[4703]: I1209 12:30:54.567062 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhqpt"] Dec 09 12:30:54 crc kubenswrapper[4703]: I1209 12:30:54.777685 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:54 crc kubenswrapper[4703]: W1209 12:30:54.779808 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03d4925e_cc3e_483d_845c_35da5aabe0b1.slice/crio-b92e4c1410af41f14b43a477a3a63ac60c696650cb05f6462254eba8d2268289 WatchSource:0}: Error finding container b92e4c1410af41f14b43a477a3a63ac60c696650cb05f6462254eba8d2268289: Status 404 returned error can't find the container with id b92e4c1410af41f14b43a477a3a63ac60c696650cb05f6462254eba8d2268289 Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.061838 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhqpt" event={"ID":"25934d60-c1c2-41eb-9470-cdcab1791642","Type":"ContainerStarted","Data":"4a9f04ce87d53b7cb3be2bf3ca56af97f08617520d8cd41677fd8eb0dc9f2eff"} Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.061936 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhqpt" event={"ID":"25934d60-c1c2-41eb-9470-cdcab1791642","Type":"ContainerStarted","Data":"cd526ef70d14898e1ea08630a93b53b2885fb556663e8f1821255f881411d247"} Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.094314 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zhqpt" podStartSLOduration=3.094289879 podStartE2EDuration="3.094289879s" podCreationTimestamp="2025-12-09 12:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:55.085285804 +0000 UTC m=+1554.334049313" watchObservedRunningTime="2025-12-09 12:30:55.094289879 +0000 UTC m=+1554.343053398" Dec 09 12:30:55 crc 
kubenswrapper[4703]: I1209 12:30:55.105644 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03d4925e-cc3e-483d-845c-35da5aabe0b1","Type":"ContainerStarted","Data":"b92e4c1410af41f14b43a477a3a63ac60c696650cb05f6462254eba8d2268289"} Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.289512 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mz25q"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.291688 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.308992 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.309334 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.356272 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.393281 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mz25q"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.413289 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.417145 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.417363 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.417431 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.417449 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sljb\" (UniqueName: \"kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.520235 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc 
kubenswrapper[4703]: I1209 12:30:55.520378 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.520405 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sljb\" (UniqueName: \"kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.520446 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.544243 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.547390 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.563953 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.593063 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sljb\" (UniqueName: \"kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb\") pod \"nova-cell1-conductor-db-sync-mz25q\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") " pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.716815 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.734764 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:30:55 crc kubenswrapper[4703]: I1209 12:30:55.819441 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mz25q" Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.078327 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:30:56 crc kubenswrapper[4703]: E1209 12:30:56.078926 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.125305 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" event={"ID":"c1d4c235-81e5-4672-bbb4-876c9e53861d","Type":"ContainerStarted","Data":"d5f4ec2946f11b2b5d8de7fd81024cc1dba13ef3a173f3e924fbac728185482d"} Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.128973 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerStarted","Data":"db3e10525c875b8148d6902f96ba07eed6175b64bd56f6cc9db86136ef4b97ab"} Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.135079 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db4f7913-939e-4090-b704-57aa4d2da879","Type":"ContainerStarted","Data":"fd8bc34e89d8dac2363f786e430add26614139c6214dccacb6c1550e15014f73"} Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.146568 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerStarted","Data":"85dee05cf5f3c7d7729cd0e357b1bde9717c2cfab3ffdf98bcf2f08e85bde1fc"} Dec 09 12:30:56 crc kubenswrapper[4703]: I1209 12:30:56.487096 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mz25q"] Dec 09 12:30:56 crc kubenswrapper[4703]: W1209 12:30:56.503595 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod544746e3_7d8d_4459_a5d0_e6688c51ccbd.slice/crio-2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8 WatchSource:0}: Error finding container 2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8: Status 404 returned error can't find the container with id 2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8 Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.172333 4703 generic.go:334] "Generic (PLEG): container finished" podID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerID="582d8964df1c7146da1c04557f013ee46915c241f761dcf9849607419fd86c03" exitCode=0 Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.172636 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" event={"ID":"c1d4c235-81e5-4672-bbb4-876c9e53861d","Type":"ContainerDied","Data":"582d8964df1c7146da1c04557f013ee46915c241f761dcf9849607419fd86c03"} Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.259647 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mz25q" 
event={"ID":"544746e3-7d8d-4459-a5d0-e6688c51ccbd","Type":"ContainerStarted","Data":"9e13c46d74f28c98eeae969c2900de9c130c095741926371229528dd47c053e4"} Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.259737 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mz25q" event={"ID":"544746e3-7d8d-4459-a5d0-e6688c51ccbd","Type":"ContainerStarted","Data":"2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8"} Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.316693 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-mz25q" podStartSLOduration=2.316658606 podStartE2EDuration="2.316658606s" podCreationTimestamp="2025-12-09 12:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:57.301837435 +0000 UTC m=+1556.550600954" watchObservedRunningTime="2025-12-09 12:30:57.316658606 +0000 UTC m=+1556.565422125" Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.813541 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:57 crc kubenswrapper[4703]: I1209 12:30:57.894790 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:02 crc kubenswrapper[4703]: I1209 12:31:02.984759 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:02 crc kubenswrapper[4703]: I1209 12:31:02.985628 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" containerName="kube-state-metrics" containerID="cri-o://63db45493e255bade9b7208be7c74215ad6faec4f8719a73d20c5aab428154fb" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.411082 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03d4925e-cc3e-483d-845c-35da5aabe0b1","Type":"ContainerStarted","Data":"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"} Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.413529 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerStarted","Data":"08db48dabf1cd675d0b3202da9728d3d8a3416373bb290f8701992abf3d79f6c"} Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.416010 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db4f7913-939e-4090-b704-57aa4d2da879","Type":"ContainerStarted","Data":"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730"} Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.416236 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="db4f7913-939e-4090-b704-57aa4d2da879" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.433828 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.928885954 podStartE2EDuration="10.433798092s" podCreationTimestamp="2025-12-09 12:30:53 +0000 UTC" firstStartedPulling="2025-12-09 12:30:54.786262915 +0000 UTC m=+1554.035026434" 
lastFinishedPulling="2025-12-09 12:31:02.291175043 +0000 UTC m=+1561.539938572" observedRunningTime="2025-12-09 12:31:03.428379787 +0000 UTC m=+1562.677143316" watchObservedRunningTime="2025-12-09 12:31:03.433798092 +0000 UTC m=+1562.682561611" Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.452639 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.94049119 podStartE2EDuration="10.452609661s" podCreationTimestamp="2025-12-09 12:30:53 +0000 UTC" firstStartedPulling="2025-12-09 12:30:55.781479002 +0000 UTC m=+1555.030242521" lastFinishedPulling="2025-12-09 12:31:02.293597473 +0000 UTC m=+1561.542360992" observedRunningTime="2025-12-09 12:31:03.448675043 +0000 UTC m=+1562.697438562" watchObservedRunningTime="2025-12-09 12:31:03.452609661 +0000 UTC m=+1562.701373180" Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.636697 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.636999 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 12:31:03 crc kubenswrapper[4703]: I1209 12:31:03.711428 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 12:31:04 crc kubenswrapper[4703]: I1209 12:31:04.025351 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:04 crc kubenswrapper[4703]: I1209 12:31:04.374258 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-kvftx" podUID="71b2ce30-0051-4329-b922-c8647bb87bb1" containerName="registry-server" probeResult="failure" output=< Dec 09 12:31:04 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:31:04 crc kubenswrapper[4703]: > Dec 09 12:31:04 crc kubenswrapper[4703]: I1209 12:31:04.460768 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.454020 4703 generic.go:334] "Generic (PLEG): container finished" podID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" containerID="63db45493e255bade9b7208be7c74215ad6faec4f8719a73d20c5aab428154fb" exitCode=2 Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.454657 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab5733f2-517c-433a-bf6c-f1cd26dde97b","Type":"ContainerDied","Data":"63db45493e255bade9b7208be7c74215ad6faec4f8719a73d20c5aab428154fb"} Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.466381 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerStarted","Data":"11d5de21fb62afae2cfaead0668b37b1aadbce509b8062bc7537879e85b6abc2"} Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.489690 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" event={"ID":"c1d4c235-81e5-4672-bbb4-876c9e53861d","Type":"ContainerStarted","Data":"1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03"} Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.489930 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.499415 
4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerStarted","Data":"224ea1b03989209785d771ecdb9420fafdb7ea82ce00a4b63f9b7bc2c1746535"} Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.526824 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" podStartSLOduration=13.526795015 podStartE2EDuration="13.526795015s" podCreationTimestamp="2025-12-09 12:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:06.515844971 +0000 UTC m=+1565.764608490" watchObservedRunningTime="2025-12-09 12:31:06.526795015 +0000 UTC m=+1565.775558534" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.576334 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.611300208 podStartE2EDuration="13.576306821s" podCreationTimestamp="2025-12-09 12:30:53 +0000 UTC" firstStartedPulling="2025-12-09 12:30:55.329335209 +0000 UTC m=+1554.578098728" lastFinishedPulling="2025-12-09 12:31:02.294341822 +0000 UTC m=+1561.543105341" observedRunningTime="2025-12-09 12:31:06.558488186 +0000 UTC m=+1565.807251705" watchObservedRunningTime="2025-12-09 12:31:06.576306821 +0000 UTC m=+1565.825070340" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.639699 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.808080 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v28sm\" (UniqueName: \"kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm\") pod \"ab5733f2-517c-433a-bf6c-f1cd26dde97b\" (UID: \"ab5733f2-517c-433a-bf6c-f1cd26dde97b\") " Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.830798 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm" (OuterVolumeSpecName: "kube-api-access-v28sm") pod "ab5733f2-517c-433a-bf6c-f1cd26dde97b" (UID: "ab5733f2-517c-433a-bf6c-f1cd26dde97b"). InnerVolumeSpecName "kube-api-access-v28sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:06 crc kubenswrapper[4703]: I1209 12:31:06.917975 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v28sm\" (UniqueName: \"kubernetes.io/projected/ab5733f2-517c-433a-bf6c-f1cd26dde97b-kube-api-access-v28sm\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.087277 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.089003 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-central-agent" containerID="cri-o://05043eabf6157ae390d134c325de17dd09fb446af0334e80a864a9718d44ce73" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.089354 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="proxy-httpd" containerID="cri-o://28c5f0b351836a1e432584bb5e2c92f85017eea9d15481a46f95b37c59a0785a" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.089585 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="sg-core" containerID="cri-o://f46a826e082edf0b6988dd959b983b3b6a2f207cc6cb53f6bea25ff15d7a14b1" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.089739 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-notification-agent" containerID="cri-o://766b563d79b84373a1fedaec7872f8e9a46fcc7572bb9a95daf5ebc5496b07fd" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.514131 4703 generic.go:334] "Generic (PLEG): container finished" podID="bee6594d-eea7-4044-8404-4210a11002c6" containerID="28c5f0b351836a1e432584bb5e2c92f85017eea9d15481a46f95b37c59a0785a" exitCode=0 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.514171 4703 generic.go:334] "Generic (PLEG): container finished" podID="bee6594d-eea7-4044-8404-4210a11002c6" containerID="f46a826e082edf0b6988dd959b983b3b6a2f207cc6cb53f6bea25ff15d7a14b1" exitCode=2 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.514233 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerDied","Data":"28c5f0b351836a1e432584bb5e2c92f85017eea9d15481a46f95b37c59a0785a"} Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.514265 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerDied","Data":"f46a826e082edf0b6988dd959b983b3b6a2f207cc6cb53f6bea25ff15d7a14b1"} Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.516158 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerStarted","Data":"8cb2ba01de0111377336b83e59562ed052947343c613ce903978582b9a3906ac"} Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.516408 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-log" 
containerID="cri-o://11d5de21fb62afae2cfaead0668b37b1aadbce509b8062bc7537879e85b6abc2" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.516490 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-metadata" containerID="cri-o://8cb2ba01de0111377336b83e59562ed052947343c613ce903978582b9a3906ac" gracePeriod=30 Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.521121 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab5733f2-517c-433a-bf6c-f1cd26dde97b","Type":"ContainerDied","Data":"0f5ed5d45924690b28ee5c7af61742cd6d0610af4d65cc72fcd1d880d830c660"} Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.521241 4703 scope.go:117] "RemoveContainer" containerID="63db45493e255bade9b7208be7c74215ad6faec4f8719a73d20c5aab428154fb" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.521353 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.547464 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.644842343 podStartE2EDuration="14.547438917s" podCreationTimestamp="2025-12-09 12:30:53 +0000 UTC" firstStartedPulling="2025-12-09 12:30:55.396459075 +0000 UTC m=+1554.645222594" lastFinishedPulling="2025-12-09 12:31:02.299055649 +0000 UTC m=+1561.547819168" observedRunningTime="2025-12-09 12:31:07.543634072 +0000 UTC m=+1566.792397591" watchObservedRunningTime="2025-12-09 12:31:07.547438917 +0000 UTC m=+1566.796202456" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.603294 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.631907 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.659094 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:07 crc kubenswrapper[4703]: E1209 12:31:07.660515 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" containerName="kube-state-metrics" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.660555 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" containerName="kube-state-metrics" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.661340 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" containerName="kube-state-metrics" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.663677 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.669262 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.669712 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.689273 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.756173 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.756241 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz57w\" (UniqueName: \"kubernetes.io/projected/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-api-access-nz57w\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.756308 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.756426 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.859624 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.859792 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.859897 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.859921 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz57w\" 
(UniqueName: \"kubernetes.io/projected/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-api-access-nz57w\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.867084 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.868960 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.886140 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:07 crc kubenswrapper[4703]: I1209 12:31:07.887841 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz57w\" (UniqueName: \"kubernetes.io/projected/3341d0c1-348a-4664-90a6-94ee7865fd94-kube-api-access-nz57w\") pod \"kube-state-metrics-0\" (UID: \"3341d0c1-348a-4664-90a6-94ee7865fd94\") " pod="openstack/kube-state-metrics-0" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.011386 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.570989 4703 generic.go:334] "Generic (PLEG): container finished" podID="bee6594d-eea7-4044-8404-4210a11002c6" containerID="05043eabf6157ae390d134c325de17dd09fb446af0334e80a864a9718d44ce73" exitCode=0 Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.571371 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerDied","Data":"05043eabf6157ae390d134c325de17dd09fb446af0334e80a864a9718d44ce73"} Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.573208 4703 generic.go:334] "Generic (PLEG): container finished" podID="0419040d-e120-4379-acc7-3d3081dcea7a" containerID="8cb2ba01de0111377336b83e59562ed052947343c613ce903978582b9a3906ac" exitCode=0 Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.573224 4703 generic.go:334] "Generic (PLEG): container finished" podID="0419040d-e120-4379-acc7-3d3081dcea7a" containerID="11d5de21fb62afae2cfaead0668b37b1aadbce509b8062bc7537879e85b6abc2" exitCode=143 Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.573253 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerDied","Data":"8cb2ba01de0111377336b83e59562ed052947343c613ce903978582b9a3906ac"} Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.573270 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerDied","Data":"11d5de21fb62afae2cfaead0668b37b1aadbce509b8062bc7537879e85b6abc2"} Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.752978 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.787258 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle\") pod \"0419040d-e120-4379-acc7-3d3081dcea7a\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.787465 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs\") pod \"0419040d-e120-4379-acc7-3d3081dcea7a\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.787523 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data\") pod \"0419040d-e120-4379-acc7-3d3081dcea7a\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.787593 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnjs4\" (UniqueName: \"kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4\") pod \"0419040d-e120-4379-acc7-3d3081dcea7a\" (UID: \"0419040d-e120-4379-acc7-3d3081dcea7a\") " Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.788986 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs" (OuterVolumeSpecName: "logs") pod "0419040d-e120-4379-acc7-3d3081dcea7a" (UID: "0419040d-e120-4379-acc7-3d3081dcea7a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.800537 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4" (OuterVolumeSpecName: "kube-api-access-jnjs4") pod "0419040d-e120-4379-acc7-3d3081dcea7a" (UID: "0419040d-e120-4379-acc7-3d3081dcea7a"). InnerVolumeSpecName "kube-api-access-jnjs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.833759 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0419040d-e120-4379-acc7-3d3081dcea7a" (UID: "0419040d-e120-4379-acc7-3d3081dcea7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.844620 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data" (OuterVolumeSpecName: "config-data") pod "0419040d-e120-4379-acc7-3d3081dcea7a" (UID: "0419040d-e120-4379-acc7-3d3081dcea7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.890561 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0419040d-e120-4379-acc7-3d3081dcea7a-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.890610 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.890623 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnjs4\" (UniqueName: \"kubernetes.io/projected/0419040d-e120-4379-acc7-3d3081dcea7a-kube-api-access-jnjs4\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:08 crc kubenswrapper[4703]: I1209 12:31:08.890637 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0419040d-e120-4379-acc7-3d3081dcea7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.072392 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:31:09 crc kubenswrapper[4703]: E1209 12:31:09.072896 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.081533 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5733f2-517c-433a-bf6c-f1cd26dde97b" path="/var/lib/kubelet/pods/ab5733f2-517c-433a-bf6c-f1cd26dde97b/volumes" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.140647 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.587179 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0419040d-e120-4379-acc7-3d3081dcea7a","Type":"ContainerDied","Data":"85dee05cf5f3c7d7729cd0e357b1bde9717c2cfab3ffdf98bcf2f08e85bde1fc"} Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.587289 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.587331 4703 scope.go:117] "RemoveContainer" containerID="8cb2ba01de0111377336b83e59562ed052947343c613ce903978582b9a3906ac" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.591736 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3341d0c1-348a-4664-90a6-94ee7865fd94","Type":"ContainerStarted","Data":"50702f27187ae8120f262fd6a261207a6819506b5aab35925f563e3bf6b50376"} Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.623451 4703 scope.go:117] "RemoveContainer" containerID="11d5de21fb62afae2cfaead0668b37b1aadbce509b8062bc7537879e85b6abc2" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.628572 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.650630 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.671937 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:09 crc kubenswrapper[4703]: E1209 12:31:09.672705 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-metadata" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.672727 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-metadata" Dec 09 12:31:09 crc kubenswrapper[4703]: E1209 12:31:09.672750 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-log" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.672757 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-log" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.673019 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-log" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.673049 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" containerName="nova-metadata-metadata" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.675812 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.682178 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.682472 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.706584 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.712077 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.712155 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq2ms\" (UniqueName: \"kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.712236 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.712322 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.712377 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.815107 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.815752 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.815863 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 
12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.816170 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.816250 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq2ms\" (UniqueName: \"kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.818863 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.822103 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.824931 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.825234 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.837691 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq2ms\" (UniqueName: \"kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms\") pod \"nova-metadata-0\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") " pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4703]: I1209 12:31:09.971434 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.578436 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.623432 4703 generic.go:334] "Generic (PLEG): container finished" podID="bee6594d-eea7-4044-8404-4210a11002c6" containerID="766b563d79b84373a1fedaec7872f8e9a46fcc7572bb9a95daf5ebc5496b07fd" exitCode=0 Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.623581 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerDied","Data":"766b563d79b84373a1fedaec7872f8e9a46fcc7572bb9a95daf5ebc5496b07fd"} Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.626273 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerStarted","Data":"1b0ebe505393ea5c42683c9d862f089787fc092824ce96e6d926d9c86da2e1c4"} Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.632449 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3341d0c1-348a-4664-90a6-94ee7865fd94","Type":"ContainerStarted","Data":"18fd41788526b0a7ca3879f28c40c908a1e297e3a386c93544d751eae2b0bb82"} Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.632638 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.661751 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.22129476 podStartE2EDuration="3.661725901s" podCreationTimestamp="2025-12-09 12:31:07 +0000 UTC" firstStartedPulling="2025-12-09 12:31:09.183454579 +0000 UTC m=+1568.432218098" lastFinishedPulling="2025-12-09 12:31:09.62388572 +0000 UTC m=+1568.872649239" observedRunningTime="2025-12-09 12:31:10.657894965 +0000 UTC m=+1569.906658484" watchObservedRunningTime="2025-12-09 12:31:10.661725901 +0000 UTC m=+1569.910489420" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.725469 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778157 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q9wn\" (UniqueName: \"kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778300 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778385 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778407 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778483 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778502 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.778613 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts\") pod \"bee6594d-eea7-4044-8404-4210a11002c6\" (UID: \"bee6594d-eea7-4044-8404-4210a11002c6\") " Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.780737 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.781149 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.792958 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn" (OuterVolumeSpecName: "kube-api-access-8q9wn") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "kube-api-access-8q9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.798943 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts" (OuterVolumeSpecName: "scripts") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.823672 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.885337 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.885650 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.885728 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bee6594d-eea7-4044-8404-4210a11002c6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.885834 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.885906 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q9wn\" (UniqueName: \"kubernetes.io/projected/bee6594d-eea7-4044-8404-4210a11002c6-kube-api-access-8q9wn\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.915148 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data" (OuterVolumeSpecName: "config-data") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.956142 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bee6594d-eea7-4044-8404-4210a11002c6" (UID: "bee6594d-eea7-4044-8404-4210a11002c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.990483 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4703]: I1209 12:31:10.990676 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee6594d-eea7-4044-8404-4210a11002c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.085313 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0419040d-e120-4379-acc7-3d3081dcea7a" path="/var/lib/kubelet/pods/0419040d-e120-4379-acc7-3d3081dcea7a/volumes" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.652143 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bee6594d-eea7-4044-8404-4210a11002c6","Type":"ContainerDied","Data":"16212a9969b8796a35b34958b3d3905d6f353beb760dead5b96d66f3eb771c5b"} Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.652212 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.652222 4703 scope.go:117] "RemoveContainer" containerID="28c5f0b351836a1e432584bb5e2c92f85017eea9d15481a46f95b37c59a0785a" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.658152 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerStarted","Data":"7c197b9c260b4ff633168e9e78206457151c1c99959528821f876e88196f0ce3"} Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.658211 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerStarted","Data":"6d67d010bc3325f8a777821c4fc45b633aa0e21bebedc99642bd8643a8f80c78"} Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.698401 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.698378303 podStartE2EDuration="2.698378303s" podCreationTimestamp="2025-12-09 12:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:11.691823 +0000 UTC m=+1570.940586529" watchObservedRunningTime="2025-12-09 12:31:11.698378303 +0000 UTC m=+1570.947141822" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.740237 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.753577 4703 scope.go:117] "RemoveContainer" containerID="f46a826e082edf0b6988dd959b983b3b6a2f207cc6cb53f6bea25ff15d7a14b1" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.764850 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.790870 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:11 crc kubenswrapper[4703]: E1209 12:31:11.791482 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-central-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791504 4703 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-central-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: E1209 12:31:11.791517 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="sg-core" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791523 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="sg-core" Dec 09 12:31:11 crc kubenswrapper[4703]: E1209 12:31:11.791531 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="proxy-httpd" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791539 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="proxy-httpd" Dec 09 12:31:11 crc kubenswrapper[4703]: E1209 12:31:11.791565 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-notification-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791573 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-notification-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791839 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-central-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791854 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="ceilometer-notification-agent" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791868 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="proxy-httpd" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.791883 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6594d-eea7-4044-8404-4210a11002c6" containerName="sg-core" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.794162 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.799120 4703 scope.go:117] "RemoveContainer" containerID="766b563d79b84373a1fedaec7872f8e9a46fcc7572bb9a95daf5ebc5496b07fd" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.799631 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.800372 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.800692 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.806242 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.839490 4703 scope.go:117] "RemoveContainer" containerID="05043eabf6157ae390d134c325de17dd09fb446af0334e80a864a9718d44ce73" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.924841 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925259 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925299 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925327 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7b4m\" (UniqueName: \"kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925365 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925401 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925449 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4703]: I1209 12:31:11.925478 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027396 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027467 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027506 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7b4m\" (UniqueName: \"kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027552 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027588 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027632 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027673 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.027726 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.029091 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.029215 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.034894 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.036480 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.049909 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.050759 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.052660 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7b4m\" (UniqueName: \"kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.053169 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.124680 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:12 crc kubenswrapper[4703]: I1209 12:31:12.678275 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.081810 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee6594d-eea7-4044-8404-4210a11002c6" path="/var/lib/kubelet/pods/bee6594d-eea7-4044-8404-4210a11002c6/volumes" Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.701445 4703 generic.go:334] "Generic (PLEG): container finished" podID="25934d60-c1c2-41eb-9470-cdcab1791642" containerID="4a9f04ce87d53b7cb3be2bf3ca56af97f08617520d8cd41677fd8eb0dc9f2eff" exitCode=0 Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.701569 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhqpt" event={"ID":"25934d60-c1c2-41eb-9470-cdcab1791642","Type":"ContainerDied","Data":"4a9f04ce87d53b7cb3be2bf3ca56af97f08617520d8cd41677fd8eb0dc9f2eff"} Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.704578 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerStarted","Data":"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c"} Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.704640 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerStarted","Data":"94d4470cecec010545b42d422d4576dc571f3d826d894c4c88d1143f29dfa75c"} Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.908493 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:13 crc kubenswrapper[4703]: I1209 12:31:13.908881 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.122514 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.245260 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.247450 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="dnsmasq-dns" containerID="cri-o://3d10f8ed77019f4d15e88bd101653da99f26003f8b29e93f8ede48be5156ed20" gracePeriod=10 Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.719974 4703 generic.go:334] "Generic (PLEG): container finished" podID="544746e3-7d8d-4459-a5d0-e6688c51ccbd" containerID="9e13c46d74f28c98eeae969c2900de9c130c095741926371229528dd47c053e4" exitCode=0 Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.720202 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mz25q" event={"ID":"544746e3-7d8d-4459-a5d0-e6688c51ccbd","Type":"ContainerDied","Data":"9e13c46d74f28c98eeae969c2900de9c130c095741926371229528dd47c053e4"} Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.722668 4703 generic.go:334] "Generic (PLEG): container finished" podID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerID="3d10f8ed77019f4d15e88bd101653da99f26003f8b29e93f8ede48be5156ed20" exitCode=0 Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.722728 4703 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" event={"ID":"a71f8c26-a813-4d5b-9ce4-c9b6075a3153","Type":"ContainerDied","Data":"3d10f8ed77019f4d15e88bd101653da99f26003f8b29e93f8ede48be5156ed20"} Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.733659 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerStarted","Data":"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9"} Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.972267 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:14 crc kubenswrapper[4703]: I1209 12:31:14.973959 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.010558 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.010638 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.194969 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231325 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231440 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231502 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231542 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231663 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96zdt\" (UniqueName: \"kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: 
\"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.231808 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config\") pod \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\" (UID: \"a71f8c26-a813-4d5b-9ce4-c9b6075a3153\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.252649 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt" (OuterVolumeSpecName: "kube-api-access-96zdt") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "kube-api-access-96zdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.377169 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96zdt\" (UniqueName: \"kubernetes.io/projected/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-kube-api-access-96zdt\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.384884 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.433825 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config" (OuterVolumeSpecName: "config") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.435980 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.439818 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.450549 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a71f8c26-a813-4d5b-9ce4-c9b6075a3153" (UID: "a71f8c26-a813-4d5b-9ce4-c9b6075a3153"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.480497 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.480553 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.480568 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.480590 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.480608 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a71f8c26-a813-4d5b-9ce4-c9b6075a3153-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.608571 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.689894 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle\") pod \"25934d60-c1c2-41eb-9470-cdcab1791642\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.690100 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdtqv\" (UniqueName: \"kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv\") pod \"25934d60-c1c2-41eb-9470-cdcab1791642\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.690226 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data\") pod \"25934d60-c1c2-41eb-9470-cdcab1791642\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.690483 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts\") pod \"25934d60-c1c2-41eb-9470-cdcab1791642\" (UID: \"25934d60-c1c2-41eb-9470-cdcab1791642\") " Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.695415 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv" (OuterVolumeSpecName: "kube-api-access-mdtqv") pod "25934d60-c1c2-41eb-9470-cdcab1791642" (UID: "25934d60-c1c2-41eb-9470-cdcab1791642"). InnerVolumeSpecName "kube-api-access-mdtqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.703115 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts" (OuterVolumeSpecName: "scripts") pod "25934d60-c1c2-41eb-9470-cdcab1791642" (UID: "25934d60-c1c2-41eb-9470-cdcab1791642"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.773716 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data" (OuterVolumeSpecName: "config-data") pod "25934d60-c1c2-41eb-9470-cdcab1791642" (UID: "25934d60-c1c2-41eb-9470-cdcab1791642"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.780730 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25934d60-c1c2-41eb-9470-cdcab1791642" (UID: "25934d60-c1c2-41eb-9470-cdcab1791642"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.796816 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.796869 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.796883 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25934d60-c1c2-41eb-9470-cdcab1791642-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.796899 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdtqv\" (UniqueName: \"kubernetes.io/projected/25934d60-c1c2-41eb-9470-cdcab1791642-kube-api-access-mdtqv\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.807937 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhqpt" event={"ID":"25934d60-c1c2-41eb-9470-cdcab1791642","Type":"ContainerDied","Data":"cd526ef70d14898e1ea08630a93b53b2885fb556663e8f1821255f881411d247"} Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.807989 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd526ef70d14898e1ea08630a93b53b2885fb556663e8f1821255f881411d247" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.808083 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhqpt" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.831140 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerStarted","Data":"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab"} Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.845889 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.846421 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-vmqd9" event={"ID":"a71f8c26-a813-4d5b-9ce4-c9b6075a3153","Type":"ContainerDied","Data":"277e9bcfbf658b9ada62d6fa32435fb64ddf89869298538fb505b1d4ad22acc8"} Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.846510 4703 scope.go:117] "RemoveContainer" containerID="3d10f8ed77019f4d15e88bd101653da99f26003f8b29e93f8ede48be5156ed20" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.932098 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.933245 4703 scope.go:117] "RemoveContainer" containerID="3ec64ab8b25ed181288e3c77b88c650509b4af8b0f7782418d89a46335589e07" Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.938452 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-log" containerID="cri-o://08db48dabf1cd675d0b3202da9728d3d8a3416373bb290f8701992abf3d79f6c" gracePeriod=30 Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.939072 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-api" containerID="cri-o://224ea1b03989209785d771ecdb9420fafdb7ea82ce00a4b63f9b7bc2c1746535" gracePeriod=30 Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.967289 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:31:15 crc kubenswrapper[4703]: I1209 12:31:15.987406 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-vmqd9"] Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.003819 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.004126 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerName="nova-scheduler-scheduler" containerID="cri-o://c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce" gracePeriod=30 Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.030982 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.586326 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mz25q"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.723237 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data\") pod \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") "
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.723630 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sljb\" (UniqueName: \"kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb\") pod \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") "
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.723809 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts\") pod \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") "
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.723985 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle\") pod \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\" (UID: \"544746e3-7d8d-4459-a5d0-e6688c51ccbd\") "
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.740610 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb" (OuterVolumeSpecName: "kube-api-access-6sljb") pod "544746e3-7d8d-4459-a5d0-e6688c51ccbd" (UID: "544746e3-7d8d-4459-a5d0-e6688c51ccbd"). InnerVolumeSpecName "kube-api-access-6sljb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.756401 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts" (OuterVolumeSpecName: "scripts") pod "544746e3-7d8d-4459-a5d0-e6688c51ccbd" (UID: "544746e3-7d8d-4459-a5d0-e6688c51ccbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.791329 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data" (OuterVolumeSpecName: "config-data") pod "544746e3-7d8d-4459-a5d0-e6688c51ccbd" (UID: "544746e3-7d8d-4459-a5d0-e6688c51ccbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.793679 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "544746e3-7d8d-4459-a5d0-e6688c51ccbd" (UID: "544746e3-7d8d-4459-a5d0-e6688c51ccbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.839864 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.839902 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.839914 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sljb\" (UniqueName: \"kubernetes.io/projected/544746e3-7d8d-4459-a5d0-e6688c51ccbd-kube-api-access-6sljb\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.839925 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/544746e3-7d8d-4459-a5d0-e6688c51ccbd-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.868322 4703 generic.go:334] "Generic (PLEG): container finished" podID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerID="08db48dabf1cd675d0b3202da9728d3d8a3416373bb290f8701992abf3d79f6c" exitCode=143
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.868414 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerDied","Data":"08db48dabf1cd675d0b3202da9728d3d8a3416373bb290f8701992abf3d79f6c"}
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.875501 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-log" containerID="cri-o://6d67d010bc3325f8a777821c4fc45b633aa0e21bebedc99642bd8643a8f80c78" gracePeriod=30
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.875916 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-mz25q"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.880324 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-mz25q" event={"ID":"544746e3-7d8d-4459-a5d0-e6688c51ccbd","Type":"ContainerDied","Data":"2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8"}
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.880382 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2551c7850e7d260658c93856ea6084c661803bead954bc26c6aea4ea1a9683e8"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.880623 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-metadata" containerID="cri-o://7c197b9c260b4ff633168e9e78206457151c1c99959528821f876e88196f0ce3" gracePeriod=30
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.994318 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 12:31:16 crc kubenswrapper[4703]: E1209 12:31:16.994886 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="init"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.994911 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="init"
Dec 09 12:31:16 crc kubenswrapper[4703]: E1209 12:31:16.994941 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544746e3-7d8d-4459-a5d0-e6688c51ccbd" containerName="nova-cell1-conductor-db-sync"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.994950 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="544746e3-7d8d-4459-a5d0-e6688c51ccbd" containerName="nova-cell1-conductor-db-sync"
Dec 09 12:31:16 crc kubenswrapper[4703]: E1209 12:31:16.994979 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="dnsmasq-dns"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.994988 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="dnsmasq-dns"
Dec 09 12:31:16 crc kubenswrapper[4703]: E1209 12:31:16.995010 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25934d60-c1c2-41eb-9470-cdcab1791642" containerName="nova-manage"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.995018 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="25934d60-c1c2-41eb-9470-cdcab1791642" containerName="nova-manage"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.995284 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="544746e3-7d8d-4459-a5d0-e6688c51ccbd" containerName="nova-cell1-conductor-db-sync"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.995309 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="25934d60-c1c2-41eb-9470-cdcab1791642" containerName="nova-manage"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.995327 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" containerName="dnsmasq-dns"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.996423 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:16 crc kubenswrapper[4703]: I1209 12:31:16.998814 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.018473 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.118350 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71f8c26-a813-4d5b-9ce4-c9b6075a3153" path="/var/lib/kubelet/pods/a71f8c26-a813-4d5b-9ce4-c9b6075a3153/volumes"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.153737 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkrjd\" (UniqueName: \"kubernetes.io/projected/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-kube-api-access-lkrjd\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.153873 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.153978 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.256884 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.257151 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkrjd\" (UniqueName: \"kubernetes.io/projected/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-kube-api-access-lkrjd\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.257249 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.263860 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.265219 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.313123 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkrjd\" (UniqueName: \"kubernetes.io/projected/8eceabb3-1419-4d9d-a3d7-fad5725b8ae9-kube-api-access-lkrjd\") pod \"nova-cell1-conductor-0\" (UID: \"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9\") " pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.337808 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.909296 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerID="7c197b9c260b4ff633168e9e78206457151c1c99959528821f876e88196f0ce3" exitCode=0
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.909601 4703 generic.go:334] "Generic (PLEG): container finished" podID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerID="6d67d010bc3325f8a777821c4fc45b633aa0e21bebedc99642bd8643a8f80c78" exitCode=143
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.909559 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerDied","Data":"7c197b9c260b4ff633168e9e78206457151c1c99959528821f876e88196f0ce3"}
Dec 09 12:31:17 crc kubenswrapper[4703]: I1209 12:31:17.911147 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerDied","Data":"6d67d010bc3325f8a777821c4fc45b633aa0e21bebedc99642bd8643a8f80c78"}
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.055493 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.073792 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.353740 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.516865 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle\") pod \"4ed30071-45f1-49f4-8b61-c982312f19eb\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") "
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.517248 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq2ms\" (UniqueName: \"kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms\") pod \"4ed30071-45f1-49f4-8b61-c982312f19eb\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") "
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.517366 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs\") pod \"4ed30071-45f1-49f4-8b61-c982312f19eb\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") "
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.517394 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data\") pod \"4ed30071-45f1-49f4-8b61-c982312f19eb\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") "
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.517418 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs\") pod \"4ed30071-45f1-49f4-8b61-c982312f19eb\" (UID: \"4ed30071-45f1-49f4-8b61-c982312f19eb\") "
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.518312 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs" (OuterVolumeSpecName: "logs") pod "4ed30071-45f1-49f4-8b61-c982312f19eb" (UID: "4ed30071-45f1-49f4-8b61-c982312f19eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.526433 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms" (OuterVolumeSpecName: "kube-api-access-xq2ms") pod "4ed30071-45f1-49f4-8b61-c982312f19eb" (UID: "4ed30071-45f1-49f4-8b61-c982312f19eb"). InnerVolumeSpecName "kube-api-access-xq2ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.573407 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ed30071-45f1-49f4-8b61-c982312f19eb" (UID: "4ed30071-45f1-49f4-8b61-c982312f19eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.606414 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4ed30071-45f1-49f4-8b61-c982312f19eb" (UID: "4ed30071-45f1-49f4-8b61-c982312f19eb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.612517 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data" (OuterVolumeSpecName: "config-data") pod "4ed30071-45f1-49f4-8b61-c982312f19eb" (UID: "4ed30071-45f1-49f4-8b61-c982312f19eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.619861 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq2ms\" (UniqueName: \"kubernetes.io/projected/4ed30071-45f1-49f4-8b61-c982312f19eb-kube-api-access-xq2ms\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.620094 4703 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.620156 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.620329 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed30071-45f1-49f4-8b61-c982312f19eb-logs\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.620400 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed30071-45f1-49f4-8b61-c982312f19eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:18 crc kubenswrapper[4703]: E1209 12:31:18.640488 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 09 12:31:18 crc kubenswrapper[4703]: E1209 12:31:18.642785 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 09 12:31:18 crc kubenswrapper[4703]: E1209 12:31:18.644589 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 09 12:31:18 crc kubenswrapper[4703]: E1209 12:31:18.644794 4703 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerName="nova-scheduler-scheduler"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.937985 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.938017 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed30071-45f1-49f4-8b61-c982312f19eb","Type":"ContainerDied","Data":"1b0ebe505393ea5c42683c9d862f089787fc092824ce96e6d926d9c86da2e1c4"}
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.938134 4703 scope.go:117] "RemoveContainer" containerID="7c197b9c260b4ff633168e9e78206457151c1c99959528821f876e88196f0ce3"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.944910 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerStarted","Data":"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8"}
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.945082 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.960108 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9","Type":"ContainerStarted","Data":"14db1f40e0689f5a211a33b1e66472bb47ff40fc6c280ea095151d1e8c757482"}
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.960172 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8eceabb3-1419-4d9d-a3d7-fad5725b8ae9","Type":"ContainerStarted","Data":"90ff4e55711a995e1e5b726ca4c560a5acbf44ff43d4ed90d536e1ac12eea86c"}
Dec 09 12:31:18 crc kubenswrapper[4703]: I1209 12:31:18.984853 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.899679894 podStartE2EDuration="7.984826835s" podCreationTimestamp="2025-12-09 12:31:11 +0000 UTC" firstStartedPulling="2025-12-09 12:31:12.678926224 +0000 UTC m=+1571.927689753" lastFinishedPulling="2025-12-09 12:31:17.764073175 +0000 UTC m=+1577.012836694" observedRunningTime="2025-12-09 12:31:18.969866831 +0000 UTC m=+1578.218630360" watchObservedRunningTime="2025-12-09 12:31:18.984826835 +0000 UTC m=+1578.233590354"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.002911 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.002858344 podStartE2EDuration="3.002858344s" podCreationTimestamp="2025-12-09 12:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:18.996685671 +0000 UTC m=+1578.245449190" watchObservedRunningTime="2025-12-09 12:31:19.002858344 +0000 UTC m=+1578.251621883"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.017204 4703 scope.go:117] "RemoveContainer" containerID="6d67d010bc3325f8a777821c4fc45b633aa0e21bebedc99642bd8643a8f80c78"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.062638 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.106844 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.106901 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 12:31:19 crc kubenswrapper[4703]: E1209 12:31:19.107485 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-metadata"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.110523 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-metadata"
Dec 09 12:31:19 crc kubenswrapper[4703]: E1209 12:31:19.110597 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-log"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.110611 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-log"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.110937 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-log"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.110975 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" containerName="nova-metadata-metadata"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.136207 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.136353 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.145881 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.146014 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.233927 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.235323 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqptj\" (UniqueName: \"kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.235469 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.235713 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.235871 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.338826 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.338930 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqptj\" (UniqueName: \"kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.338957 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.338999 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.339032 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.340795 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.347562 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.353811 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.360173 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.367937 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqptj\" (UniqueName: \"kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj\") pod \"nova-metadata-0\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.463731 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 12:31:19 crc kubenswrapper[4703]: I1209 12:31:19.976713 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.125591 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 12:31:20 crc kubenswrapper[4703]: W1209 12:31:20.145922 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1167f24a_93e8_4578_b9e7_44bca208de2c.slice/crio-1857e1a06c1846c3b5043ea036788e40243ffaa0ffed95e81cd9af7e1db85cf6 WatchSource:0}: Error finding container 1857e1a06c1846c3b5043ea036788e40243ffaa0ffed95e81cd9af7e1db85cf6: Status 404 returned error can't find the container with id 1857e1a06c1846c3b5043ea036788e40243ffaa0ffed95e81cd9af7e1db85cf6
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.665549 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.937777 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data\") pod \"03d4925e-cc3e-483d-845c-35da5aabe0b1\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") "
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.938029 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q56gt\" (UniqueName: \"kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt\") pod \"03d4925e-cc3e-483d-845c-35da5aabe0b1\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") "
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.938628 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle\") pod \"03d4925e-cc3e-483d-845c-35da5aabe0b1\" (UID: \"03d4925e-cc3e-483d-845c-35da5aabe0b1\") "
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.946616 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt" (OuterVolumeSpecName: "kube-api-access-q56gt") pod "03d4925e-cc3e-483d-845c-35da5aabe0b1" (UID: "03d4925e-cc3e-483d-845c-35da5aabe0b1"). InnerVolumeSpecName "kube-api-access-q56gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.974677 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03d4925e-cc3e-483d-845c-35da5aabe0b1" (UID: "03d4925e-cc3e-483d-845c-35da5aabe0b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:20 crc kubenswrapper[4703]: I1209 12:31:20.978365 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data" (OuterVolumeSpecName: "config-data") pod "03d4925e-cc3e-483d-845c-35da5aabe0b1" (UID: "03d4925e-cc3e-483d-845c-35da5aabe0b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.000094 4703 generic.go:334] "Generic (PLEG): container finished" podID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce" exitCode=0
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.000575 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.000618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03d4925e-cc3e-483d-845c-35da5aabe0b1","Type":"ContainerDied","Data":"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"}
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.001606 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03d4925e-cc3e-483d-845c-35da5aabe0b1","Type":"ContainerDied","Data":"b92e4c1410af41f14b43a477a3a63ac60c696650cb05f6462254eba8d2268289"}
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.001710 4703 scope.go:117] "RemoveContainer" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.014620 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerStarted","Data":"69fce99ba6cf97efb89b40fe942d16d62f986bb754d5974d71ed874867b69023"}
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.015550 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerStarted","Data":"86cdd72d3e398e05be8bfbf68b63f4b3d01609abd7c732bf286f6a8be4e9e44c"}
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.015712 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerStarted","Data":"1857e1a06c1846c3b5043ea036788e40243ffaa0ffed95e81cd9af7e1db85cf6"}
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.044398 4703 scope.go:117] "RemoveContainer" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.045253 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.046250 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d4925e-cc3e-483d-845c-35da5aabe0b1-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.046282 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q56gt\" (UniqueName: \"kubernetes.io/projected/03d4925e-cc3e-483d-845c-35da5aabe0b1-kube-api-access-q56gt\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:21 crc kubenswrapper[4703]: E1209 12:31:21.047474 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce\": container with ID starting with c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce not found: ID does not exist" containerID="c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.047527 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce"} err="failed to get container status \"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce\": rpc error: code = NotFound desc = could not find container \"c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce\": container with ID starting with c66522e7d5747299df7170ef7925487088c665855d0c6cc393e7df84e82c83ce not found: ID does not exist"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.055087 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.055056562 podStartE2EDuration="2.055056562s" podCreationTimestamp="2025-12-09 12:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:21.04777428 +0000 UTC m=+1580.296537809" watchObservedRunningTime="2025-12-09 12:31:21.055056562 +0000 UTC m=+1580.303820081"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.167600 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed30071-45f1-49f4-8b61-c982312f19eb" path="/var/lib/kubelet/pods/4ed30071-45f1-49f4-8b61-c982312f19eb/volumes"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.175379 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.215292 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.281429 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 12:31:21 crc kubenswrapper[4703]: E1209 12:31:21.282379 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerName="nova-scheduler-scheduler"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.282536 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerName="nova-scheduler-scheduler"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.282893 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" containerName="nova-scheduler-scheduler"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.284098 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.305851 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.312603 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.477139 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.477632 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbls\" (UniqueName: \"kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.477900 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.579993 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.580251 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.580295 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbls\" (UniqueName: \"kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.591145 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.608425 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbls\" (UniqueName: \"kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.608547 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " pod="openstack/nova-scheduler-0"
Dec 09 12:31:21 crc kubenswrapper[4703]: I1209 12:31:21.742518 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 12:31:22 crc kubenswrapper[4703]: I1209 12:31:22.452404 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.046598 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"828c53a9-97b5-4039-bca6-9005809584e3","Type":"ContainerStarted","Data":"d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b"}
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.047111 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"828c53a9-97b5-4039-bca6-9005809584e3","Type":"ContainerStarted","Data":"025bfb8948946c244e5d3a9a903b5688ff65dc46307cf14cd12c8dec859a6bf6"}
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.083808 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d4925e-cc3e-483d-845c-35da5aabe0b1" path="/var/lib/kubelet/pods/03d4925e-cc3e-483d-845c-35da5aabe0b1/volumes"
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.083943 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.083913216 podStartE2EDuration="2.083913216s" podCreationTimestamp="2025-12-09 12:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:23.068002109 +0000 UTC m=+1582.316765628" watchObservedRunningTime="2025-12-09 12:31:23.083913216 +0000 UTC m=+1582.332676735"
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.908568 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 12:31:23 crc kubenswrapper[4703]: I1209 12:31:23.909784 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.062889 4703 generic.go:334] "Generic (PLEG): container finished" podID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerID="224ea1b03989209785d771ecdb9420fafdb7ea82ce00a4b63f9b7bc2c1746535" exitCode=0
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.063285 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerDied","Data":"224ea1b03989209785d771ecdb9420fafdb7ea82ce00a4b63f9b7bc2c1746535"}
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.070257 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"
Dec 09 12:31:24 crc kubenswrapper[4703]: E1209 12:31:24.070727 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.464967 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.465037 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 09 12:31:24 crc kubenswrapper[4703]: I1209 12:31:24.980024 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.088693 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data\") pod \"a0fa2550-b4dd-4365-8be1-b918c233a938\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") "
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.089028 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs\") pod \"a0fa2550-b4dd-4365-8be1-b918c233a938\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") "
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.089074 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn4nr\" (UniqueName: \"kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr\") pod \"a0fa2550-b4dd-4365-8be1-b918c233a938\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") "
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.089159 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle\") pod \"a0fa2550-b4dd-4365-8be1-b918c233a938\" (UID: \"a0fa2550-b4dd-4365-8be1-b918c233a938\") "
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.089681 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs" (OuterVolumeSpecName: "logs") pod "a0fa2550-b4dd-4365-8be1-b918c233a938" (UID: "a0fa2550-b4dd-4365-8be1-b918c233a938"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.107237 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr" (OuterVolumeSpecName: "kube-api-access-cn4nr") pod "a0fa2550-b4dd-4365-8be1-b918c233a938" (UID: "a0fa2550-b4dd-4365-8be1-b918c233a938"). InnerVolumeSpecName "kube-api-access-cn4nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.112508 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.138857 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data" (OuterVolumeSpecName: "config-data") pod "a0fa2550-b4dd-4365-8be1-b918c233a938" (UID: "a0fa2550-b4dd-4365-8be1-b918c233a938"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.148137 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0fa2550-b4dd-4365-8be1-b918c233a938" (UID: "a0fa2550-b4dd-4365-8be1-b918c233a938"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.192710 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.192765 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0fa2550-b4dd-4365-8be1-b918c233a938-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.192778 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0fa2550-b4dd-4365-8be1-b918c233a938-logs\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.192791 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn4nr\" (UniqueName: \"kubernetes.io/projected/a0fa2550-b4dd-4365-8be1-b918c233a938-kube-api-access-cn4nr\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.248454 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0fa2550-b4dd-4365-8be1-b918c233a938","Type":"ContainerDied","Data":"db3e10525c875b8148d6902f96ba07eed6175b64bd56f6cc9db86136ef4b97ab"}
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.248567 4703 scope.go:117] "RemoveContainer" containerID="224ea1b03989209785d771ecdb9420fafdb7ea82ce00a4b63f9b7bc2c1746535"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.274877 4703 scope.go:117] "RemoveContainer" containerID="08db48dabf1cd675d0b3202da9728d3d8a3416373bb290f8701992abf3d79f6c"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.455239 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.468694 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.494343 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 09 12:31:25 crc kubenswrapper[4703]: E1209 12:31:25.494970 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-log"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.494993 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-log"
Dec 09 12:31:25 crc kubenswrapper[4703]: E1209 12:31:25.495016 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-api"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.495023 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-api"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.497607 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-log"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.497677 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" containerName="nova-api-api"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.499438 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.501993 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.505511 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.602648 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.602829 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.602899 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6czz\" (UniqueName: \"kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.603097 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.705302 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6czz\" (UniqueName: \"kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.705574 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.705655 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.705740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.706332 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.711103 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.721928 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.727793 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6czz\" (UniqueName: \"kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz\") pod \"nova-api-0\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " pod="openstack/nova-api-0"
Dec 09 12:31:25 crc kubenswrapper[4703]: I1209 12:31:25.834874 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 12:31:26 crc kubenswrapper[4703]: I1209 12:31:26.438061 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 12:31:26 crc kubenswrapper[4703]: W1209 12:31:26.440715 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53184350_7fa0_45a1_ba83_b03735f6c261.slice/crio-e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708 WatchSource:0}: Error finding container e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708: Status 404 returned error can't find the container with id e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708
Dec 09 12:31:26 crc kubenswrapper[4703]: I1209 12:31:26.746325 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.099796 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0fa2550-b4dd-4365-8be1-b918c233a938" path="/var/lib/kubelet/pods/a0fa2550-b4dd-4365-8be1-b918c233a938/volumes"
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.153907 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerStarted","Data":"f5e4c85f2571d5e6aaeeb15fb13a0172a43115609aae53de1c9348495cb363de"}
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.154207 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerStarted","Data":"7488ed608448613a60ebc602396db673e0ad3ea7a2355cc16b76e77e175fb52b"}
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.154312 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerStarted","Data":"e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708"}
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.191116 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.191088 podStartE2EDuration="2.191088s" podCreationTimestamp="2025-12-09 12:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:27.177884059 +0000 UTC m=+1586.426647578" watchObservedRunningTime="2025-12-09 12:31:27.191088 +0000 UTC m=+1586.439851519"
Dec 09 12:31:27 crc kubenswrapper[4703]: I1209 12:31:27.391598 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 09 12:31:29 crc kubenswrapper[4703]: I1209 12:31:29.464927 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 12:31:29 crc kubenswrapper[4703]: I1209 12:31:29.465724 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Dec 09 12:31:30 crc kubenswrapper[4703]: I1209 12:31:30.478535 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:31:30 crc kubenswrapper[4703]: I1209 12:31:30.478817 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:31:31 crc kubenswrapper[4703]: I1209 12:31:31.744408 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 09 12:31:31 crc kubenswrapper[4703]: I1209 12:31:31.781296 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 09 12:31:32 crc kubenswrapper[4703]: I1209 12:31:32.349878 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.030246 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.151742 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l69h\" (UniqueName: \"kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h\") pod \"db4f7913-939e-4090-b704-57aa4d2da879\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") "
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.152364 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data\") pod \"db4f7913-939e-4090-b704-57aa4d2da879\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") "
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.152407 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle\") pod \"db4f7913-939e-4090-b704-57aa4d2da879\" (UID: \"db4f7913-939e-4090-b704-57aa4d2da879\") "
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.159529 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h" (OuterVolumeSpecName: "kube-api-access-5l69h") pod "db4f7913-939e-4090-b704-57aa4d2da879" (UID: "db4f7913-939e-4090-b704-57aa4d2da879"). InnerVolumeSpecName "kube-api-access-5l69h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.184388 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data" (OuterVolumeSpecName: "config-data") pod "db4f7913-939e-4090-b704-57aa4d2da879" (UID: "db4f7913-939e-4090-b704-57aa4d2da879"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.186419 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4f7913-939e-4090-b704-57aa4d2da879" (UID: "db4f7913-939e-4090-b704-57aa4d2da879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.248758 4703 generic.go:334] "Generic (PLEG): container finished" podID="db4f7913-939e-4090-b704-57aa4d2da879" containerID="b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730" exitCode=137
Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.248807 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.248830 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db4f7913-939e-4090-b704-57aa4d2da879","Type":"ContainerDied","Data":"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730"} Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.249000 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"db4f7913-939e-4090-b704-57aa4d2da879","Type":"ContainerDied","Data":"fd8bc34e89d8dac2363f786e430add26614139c6214dccacb6c1550e15014f73"} Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.249023 4703 scope.go:117] "RemoveContainer" containerID="b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.255178 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l69h\" (UniqueName: \"kubernetes.io/projected/db4f7913-939e-4090-b704-57aa4d2da879-kube-api-access-5l69h\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.255222 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.255239 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4f7913-939e-4090-b704-57aa4d2da879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.292637 4703 scope.go:117] "RemoveContainer" containerID="b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730" Dec 09 12:31:34 crc kubenswrapper[4703]: E1209 12:31:34.293332 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730\": container with ID starting with b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730 not found: ID does not exist" containerID="b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.293369 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730"} err="failed to get container status \"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730\": rpc error: code = NotFound desc = could not find container \"b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730\": container with ID starting with b3d7dc0b4f91c42eb020db90cca47b443cc8348b15a4fdf17a6c11ed00a86730 not found: ID does not exist" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.298033 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.327860 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.352284 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:34 crc kubenswrapper[4703]: E1209 12:31:34.352867 4703 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="db4f7913-939e-4090-b704-57aa4d2da879" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.352892 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4f7913-939e-4090-b704-57aa4d2da879" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.353136 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4f7913-939e-4090-b704-57aa4d2da879" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.363320 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.368553 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.368675 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.368757 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.395998 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.473093 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.473180 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.473839 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.473910 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqkd\" (UniqueName: \"kubernetes.io/projected/24eb5ce0-a821-4df9-b399-0a05417ef984-kube-api-access-8qqkd\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.473995 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.576565 4703 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.576635 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.576732 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.576772 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqkd\" (UniqueName: \"kubernetes.io/projected/24eb5ce0-a821-4df9-b399-0a05417ef984-kube-api-access-8qqkd\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.576807 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.582883 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.583209 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.583424 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.585681 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/24eb5ce0-a821-4df9-b399-0a05417ef984-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.597358 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqkd\" (UniqueName: 
\"kubernetes.io/projected/24eb5ce0-a821-4df9-b399-0a05417ef984-kube-api-access-8qqkd\") pod \"nova-cell1-novncproxy-0\" (UID: \"24eb5ce0-a821-4df9-b399-0a05417ef984\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:34 crc kubenswrapper[4703]: I1209 12:31:34.689763 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:35 crc kubenswrapper[4703]: I1209 12:31:35.085295 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4f7913-939e-4090-b704-57aa4d2da879" path="/var/lib/kubelet/pods/db4f7913-939e-4090-b704-57aa4d2da879/volumes" Dec 09 12:31:35 crc kubenswrapper[4703]: I1209 12:31:35.199933 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:35 crc kubenswrapper[4703]: I1209 12:31:35.263480 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24eb5ce0-a821-4df9-b399-0a05417ef984","Type":"ContainerStarted","Data":"a323f3969f750ac31eccd0d14bb6bf44d1859b9c1ce1c3ef98e8118b44fea2f2"} Dec 09 12:31:35 crc kubenswrapper[4703]: I1209 12:31:35.836225 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:35 crc kubenswrapper[4703]: I1209 12:31:35.836598 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:36 crc kubenswrapper[4703]: I1209 12:31:36.293483 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"24eb5ce0-a821-4df9-b399-0a05417ef984","Type":"ContainerStarted","Data":"d94e25f6019ac9831b110d075f9f8edd1f3859d8a1ccd510e27dbb148703d92c"} Dec 09 12:31:36 crc kubenswrapper[4703]: I1209 12:31:36.333494 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.333459424 podStartE2EDuration="2.333459424s" podCreationTimestamp="2025-12-09 12:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:36.322278325 +0000 UTC m=+1595.571041854" watchObservedRunningTime="2025-12-09 12:31:36.333459424 +0000 UTC m=+1595.582222943" Dec 09 12:31:36 crc kubenswrapper[4703]: I1209 12:31:36.877524 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:36 crc kubenswrapper[4703]: I1209 12:31:36.918458 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:38 crc kubenswrapper[4703]: I1209 12:31:38.070711 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:31:38 crc kubenswrapper[4703]: E1209 12:31:38.071395 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.472606 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.473027 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.479865 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.479921 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.675886 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.678679 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.690807 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.692663 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.738913 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.738976 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcszt\" (UniqueName: \"kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.739012 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.840077 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.840183 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcszt\" (UniqueName: \"kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt\") pod \"redhat-marketplace-qkxln\" (UID: 
\"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.840232 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.840894 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.842594 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:39 crc kubenswrapper[4703]: I1209 12:31:39.864572 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcszt\" (UniqueName: \"kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt\") pod \"redhat-marketplace-qkxln\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:40 crc kubenswrapper[4703]: I1209 12:31:40.066536 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:40 crc kubenswrapper[4703]: I1209 12:31:40.636403 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:41 crc kubenswrapper[4703]: I1209 12:31:41.358594 4703 generic.go:334] "Generic (PLEG): container finished" podID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerID="9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7" exitCode=0 Dec 09 12:31:41 crc kubenswrapper[4703]: I1209 12:31:41.359046 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerDied","Data":"9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7"} Dec 09 12:31:41 crc kubenswrapper[4703]: I1209 12:31:41.359146 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerStarted","Data":"99b54f4e6d2b2d04261a54fc412d2e2e85a0cbfa3fb51fa132ba9084e1761044"} Dec 09 12:31:42 crc kubenswrapper[4703]: I1209 12:31:42.142700 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 12:31:42 crc kubenswrapper[4703]: I1209 12:31:42.381079 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerStarted","Data":"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c"} Dec 09 12:31:43 crc kubenswrapper[4703]: I1209 12:31:43.396663 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerID="35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c" exitCode=0 Dec 09 12:31:43 crc kubenswrapper[4703]: I1209 12:31:43.397016 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerDied","Data":"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c"} Dec 09 12:31:43 crc kubenswrapper[4703]: I1209 12:31:43.401082 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:31:44 crc kubenswrapper[4703]: I1209 12:31:44.411731 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerStarted","Data":"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c"} Dec 09 12:31:44 crc kubenswrapper[4703]: I1209 12:31:44.441265 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkxln" podStartSLOduration=2.977835902 podStartE2EDuration="5.44124302s" podCreationTimestamp="2025-12-09 12:31:39 +0000 UTC" firstStartedPulling="2025-12-09 12:31:41.361355345 +0000 UTC m=+1600.610118864" lastFinishedPulling="2025-12-09 12:31:43.824762463 +0000 UTC m=+1603.073525982" observedRunningTime="2025-12-09 12:31:44.433473265 +0000 UTC m=+1603.682236794" watchObservedRunningTime="2025-12-09 12:31:44.44124302 +0000 UTC m=+1603.690006529" Dec 09 12:31:44 crc kubenswrapper[4703]: I1209 12:31:44.690518 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:44 crc kubenswrapper[4703]: I1209 12:31:44.713774 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.446379 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.746044 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-znxfc"] Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.755331 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.763826 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.764102 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.765707 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-znxfc"] Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.816320 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6dr\" (UniqueName: \"kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.816432 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.816464 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.816509 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.839829 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.840227 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.840680 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.845402 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.919553 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.919670 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6dr\" (UniqueName: \"kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr\") pod \"nova-cell1-cell-mapping-znxfc\" 
(UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.919768 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.919805 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.931308 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.934800 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.949896 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb6dr\" (UniqueName: \"kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:45 crc kubenswrapper[4703]: I1209 12:31:45.949945 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data\") pod \"nova-cell1-cell-mapping-znxfc\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:46 crc kubenswrapper[4703]: I1209 12:31:46.083294 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:46 crc kubenswrapper[4703]: I1209 12:31:46.439110 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:31:46 crc kubenswrapper[4703]: I1209 12:31:46.705793 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:31:46 crc kubenswrapper[4703]: I1209 12:31:46.737954 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-znxfc"] Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.026308 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.034427 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.134801 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.168011 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqz5\" (UniqueName: \"kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.168108 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.206730 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.208084 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.208351 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.208385 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.312418 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.312953 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.313232 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lfqz5\" (UniqueName: \"kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.313649 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.313873 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.314451 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.314542 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.315137 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.324518 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.327442 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.328913 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.355400 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfqz5\" (UniqueName: 
\"kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5\") pod \"dnsmasq-dns-5fd9b586ff-sbnjf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.409278 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.465061 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-znxfc" event={"ID":"3bb06d6a-e31a-4624-bbca-fa4f970d5685","Type":"ContainerStarted","Data":"5444d85da0ade88650fe09ff8acb4d4b0e785660fa906a130d8a24089db9f2ea"} Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.465113 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-znxfc" event={"ID":"3bb06d6a-e31a-4624-bbca-fa4f970d5685","Type":"ContainerStarted","Data":"9bea482bfa6f8f195d6a5df823ee1c8ac52e141803b798dd35786d546cbf795b"} Dec 09 12:31:47 crc kubenswrapper[4703]: I1209 12:31:47.492587 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-znxfc" podStartSLOduration=2.492559991 podStartE2EDuration="2.492559991s" podCreationTimestamp="2025-12-09 12:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:47.489439944 +0000 UTC m=+1606.738203463" watchObservedRunningTime="2025-12-09 12:31:47.492559991 +0000 UTC m=+1606.741323510" Dec 09 12:31:48 crc kubenswrapper[4703]: I1209 12:31:48.068350 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:31:48 crc kubenswrapper[4703]: I1209 12:31:48.483178 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerStarted","Data":"aba61cdfb0aaa77557047ad1f723d313e9187c4a93208abea502994183402071"} Dec 09 12:31:48 crc kubenswrapper[4703]: I1209 12:31:48.483545 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerStarted","Data":"90b95f2f783c2b4a120677cb4481700b9479871be4d3df39e4747e975fb6bf07"} Dec 09 12:31:49 crc kubenswrapper[4703]: I1209 12:31:49.513566 4703 generic.go:334] "Generic (PLEG): container finished" podID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerID="aba61cdfb0aaa77557047ad1f723d313e9187c4a93208abea502994183402071" exitCode=0 Dec 09 12:31:49 crc kubenswrapper[4703]: I1209 12:31:49.513618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerDied","Data":"aba61cdfb0aaa77557047ad1f723d313e9187c4a93208abea502994183402071"} Dec 09 12:31:49 crc kubenswrapper[4703]: I1209 12:31:49.863371 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:49 crc kubenswrapper[4703]: I1209 12:31:49.864116 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-log" containerID="cri-o://7488ed608448613a60ebc602396db673e0ad3ea7a2355cc16b76e77e175fb52b" gracePeriod=30 Dec 09 12:31:49 crc kubenswrapper[4703]: I1209 12:31:49.864563 4703 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-api" containerID="cri-o://f5e4c85f2571d5e6aaeeb15fb13a0172a43115609aae53de1c9348495cb363de" gracePeriod=30 Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.067981 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.068036 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.153793 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.537217 4703 generic.go:334] "Generic (PLEG): container finished" podID="53184350-7fa0-45a1-ba83-b03735f6c261" containerID="7488ed608448613a60ebc602396db673e0ad3ea7a2355cc16b76e77e175fb52b" exitCode=143 Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.537261 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerDied","Data":"7488ed608448613a60ebc602396db673e0ad3ea7a2355cc16b76e77e175fb52b"} Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.540309 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerStarted","Data":"c0aa394d9727d0e4b5133d6aa2c132f0dc9226344579b548e6da63526f98a692"} Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.540476 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.629743 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.657005 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" podStartSLOduration=4.6569702280000005 podStartE2EDuration="4.656970228s" podCreationTimestamp="2025-12-09 12:31:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:50.607143414 +0000 UTC m=+1609.855906933" watchObservedRunningTime="2025-12-09 12:31:50.656970228 +0000 UTC m=+1609.905733747" Dec 09 12:31:50 crc kubenswrapper[4703]: I1209 12:31:50.706581 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:51 crc kubenswrapper[4703]: I1209 12:31:51.694977 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:51 crc kubenswrapper[4703]: I1209 12:31:51.695992 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-central-agent" containerID="cri-o://d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c" gracePeriod=30 Dec 09 12:31:51 crc kubenswrapper[4703]: I1209 12:31:51.696736 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="proxy-httpd" 
containerID="cri-o://c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8" gracePeriod=30 Dec 09 12:31:51 crc kubenswrapper[4703]: I1209 12:31:51.696789 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="sg-core" containerID="cri-o://95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab" gracePeriod=30 Dec 09 12:31:51 crc kubenswrapper[4703]: I1209 12:31:51.696829 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-notification-agent" containerID="cri-o://875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9" gracePeriod=30 Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565149 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerID="c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8" exitCode=0 Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565366 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerID="95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab" exitCode=2 Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565381 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerID="d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c" exitCode=0 Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565628 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkxln" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="registry-server" containerID="cri-o://fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c" gracePeriod=2 Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565278 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerDied","Data":"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8"} Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565745 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerDied","Data":"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab"} Dec 09 12:31:52 crc kubenswrapper[4703]: I1209 12:31:52.565763 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerDied","Data":"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c"} Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.070169 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:31:53 crc kubenswrapper[4703]: E1209 12:31:53.070717 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 
12:31:53.453339 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.556723 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities\") pod \"07da3beb-8b3f-4928-a664-fb98cf29e31d\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.556948 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcszt\" (UniqueName: \"kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt\") pod \"07da3beb-8b3f-4928-a664-fb98cf29e31d\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.556992 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content\") pod \"07da3beb-8b3f-4928-a664-fb98cf29e31d\" (UID: \"07da3beb-8b3f-4928-a664-fb98cf29e31d\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.558105 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities" (OuterVolumeSpecName: "utilities") pod "07da3beb-8b3f-4928-a664-fb98cf29e31d" (UID: "07da3beb-8b3f-4928-a664-fb98cf29e31d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.569738 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt" (OuterVolumeSpecName: "kube-api-access-qcszt") pod "07da3beb-8b3f-4928-a664-fb98cf29e31d" (UID: "07da3beb-8b3f-4928-a664-fb98cf29e31d"). InnerVolumeSpecName "kube-api-access-qcszt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.582754 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07da3beb-8b3f-4928-a664-fb98cf29e31d" (UID: "07da3beb-8b3f-4928-a664-fb98cf29e31d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.602217 4703 generic.go:334] "Generic (PLEG): container finished" podID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerID="fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c" exitCode=0 Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.602361 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerDied","Data":"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c"} Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.602400 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkxln" event={"ID":"07da3beb-8b3f-4928-a664-fb98cf29e31d","Type":"ContainerDied","Data":"99b54f4e6d2b2d04261a54fc412d2e2e85a0cbfa3fb51fa132ba9084e1761044"} Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.602427 4703 scope.go:117] "RemoveContainer" containerID="fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.602620 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkxln" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.610160 4703 generic.go:334] "Generic (PLEG): container finished" podID="53184350-7fa0-45a1-ba83-b03735f6c261" containerID="f5e4c85f2571d5e6aaeeb15fb13a0172a43115609aae53de1c9348495cb363de" exitCode=0 Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.610330 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerDied","Data":"f5e4c85f2571d5e6aaeeb15fb13a0172a43115609aae53de1c9348495cb363de"} Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.610428 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"53184350-7fa0-45a1-ba83-b03735f6c261","Type":"ContainerDied","Data":"e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708"} Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.610500 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44f70400bf9f5f7110676d035c066ab2bd248c54655ed1c0381f63ef1772708" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.662097 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.662142 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcszt\" (UniqueName: \"kubernetes.io/projected/07da3beb-8b3f-4928-a664-fb98cf29e31d-kube-api-access-qcszt\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.662153 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07da3beb-8b3f-4928-a664-fb98cf29e31d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.706116 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.735474 4703 scope.go:117] "RemoveContainer" containerID="35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.736241 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.785263 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkxln"] Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.785286 4703 scope.go:117] "RemoveContainer" containerID="9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.788865 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs\") pod \"53184350-7fa0-45a1-ba83-b03735f6c261\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.789353 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs" (OuterVolumeSpecName: "logs") pod "53184350-7fa0-45a1-ba83-b03735f6c261" (UID: "53184350-7fa0-45a1-ba83-b03735f6c261"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.798063 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle\") pod \"53184350-7fa0-45a1-ba83-b03735f6c261\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.798230 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6czz\" (UniqueName: \"kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz\") pod \"53184350-7fa0-45a1-ba83-b03735f6c261\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.798541 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data\") pod \"53184350-7fa0-45a1-ba83-b03735f6c261\" (UID: \"53184350-7fa0-45a1-ba83-b03735f6c261\") " Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.821882 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53184350-7fa0-45a1-ba83-b03735f6c261-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.822136 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz" (OuterVolumeSpecName: "kube-api-access-t6czz") pod "53184350-7fa0-45a1-ba83-b03735f6c261" (UID: "53184350-7fa0-45a1-ba83-b03735f6c261"). InnerVolumeSpecName "kube-api-access-t6czz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.839440 4703 scope.go:117] "RemoveContainer" containerID="fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c" Dec 09 12:31:53 crc kubenswrapper[4703]: E1209 12:31:53.840467 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c\": container with ID starting with fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c not found: ID does not exist" containerID="fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.840549 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c"} err="failed to get container status \"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c\": rpc error: code = NotFound desc = could not find container \"fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c\": container with ID starting with fa596899e0e2eaf0379c7b694dfa0b19b1c43b78715b26c3209b4a395380b55c not found: ID does not exist" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.840587 4703 scope.go:117] "RemoveContainer" containerID="35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c" Dec 09 12:31:53 crc kubenswrapper[4703]: E1209 12:31:53.843914 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c\": container with ID starting with 35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c not found: ID does not exist" containerID="35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.843956 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c"} err="failed to get container status \"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c\": rpc error: code = NotFound desc = could not find container \"35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c\": container with ID starting with 35a9a652c1836be0f41fe0a763f2f8f621801afb81382aa679af4cc2491ef47c not found: ID does not exist" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.843977 4703 scope.go:117] "RemoveContainer" containerID="9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7" Dec 09 12:31:53 crc kubenswrapper[4703]: E1209 12:31:53.848282 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7\": container with ID starting with 9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7 not found: ID does not exist" containerID="9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.848401 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7"} err="failed to get container status \"9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7\": rpc error: code = NotFound desc = could not 
find container \"9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7\": container with ID starting with 9cd19d7c6e413e6c14687396c207cb9447fa545041347fbce04ce7c9631ce1d7 not found: ID does not exist" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.884381 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53184350-7fa0-45a1-ba83-b03735f6c261" (UID: "53184350-7fa0-45a1-ba83-b03735f6c261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.885369 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data" (OuterVolumeSpecName: "config-data") pod "53184350-7fa0-45a1-ba83-b03735f6c261" (UID: "53184350-7fa0-45a1-ba83-b03735f6c261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.924237 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.924322 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6czz\" (UniqueName: \"kubernetes.io/projected/53184350-7fa0-45a1-ba83-b03735f6c261-kube-api-access-t6czz\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:53 crc kubenswrapper[4703]: I1209 12:31:53.924341 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53184350-7fa0-45a1-ba83-b03735f6c261-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.626489 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.678740 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.691503 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.710377 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:54 crc kubenswrapper[4703]: E1209 12:31:54.711283 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-log" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.711367 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-log" Dec 09 12:31:54 crc kubenswrapper[4703]: E1209 12:31:54.711444 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-api" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.711511 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-api" Dec 09 12:31:54 crc kubenswrapper[4703]: E1209 12:31:54.711610 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="extract-utilities" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.711659 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="extract-utilities" Dec 09 12:31:54 crc kubenswrapper[4703]: E1209 12:31:54.711716 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="registry-server" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.711762 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="registry-server" Dec 09 12:31:54 crc kubenswrapper[4703]: E1209 12:31:54.711823 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="extract-content" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.711868 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="extract-content" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.712239 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-log" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.712327 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" containerName="nova-api-api" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.712402 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" containerName="registry-server" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.713988 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.717984 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.718069 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.718140 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.745743 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.846749 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.847246 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.847463 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.847586 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.847745 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.847982 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.950820 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.951421 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6\") pod 
\"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.951600 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.951703 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.951879 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.952124 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.952432 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.958130 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.958663 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.968077 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.970758 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 09 12:31:54 crc kubenswrapper[4703]: I1209 12:31:54.979772 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6\") pod \"nova-api-0\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " pod="openstack/nova-api-0" Dec 
09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.057155 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.088180 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07da3beb-8b3f-4928-a664-fb98cf29e31d" path="/var/lib/kubelet/pods/07da3beb-8b3f-4928-a664-fb98cf29e31d/volumes" Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.089548 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53184350-7fa0-45a1-ba83-b03735f6c261" path="/var/lib/kubelet/pods/53184350-7fa0-45a1-ba83-b03735f6c261/volumes" Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.621482 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.650681 4703 generic.go:334] "Generic (PLEG): container finished" podID="3bb06d6a-e31a-4624-bbca-fa4f970d5685" containerID="5444d85da0ade88650fe09ff8acb4d4b0e785660fa906a130d8a24089db9f2ea" exitCode=0 Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.650727 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-znxfc" event={"ID":"3bb06d6a-e31a-4624-bbca-fa4f970d5685","Type":"ContainerDied","Data":"5444d85da0ade88650fe09ff8acb4d4b0e785660fa906a130d8a24089db9f2ea"} Dec 09 12:31:55 crc kubenswrapper[4703]: I1209 12:31:55.655589 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerStarted","Data":"cb7e6af9217a1cb49b27066c7a9609c42b11404b39c5587e70de5cd0f4dd8f7b"} Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.632756 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.698971 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699082 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699200 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699263 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699293 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7b4m\" (UniqueName: \"kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699432 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699494 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.699621 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data\") pod \"8e6f4580-9b21-4b99-a48d-56df09b6863d\" (UID: \"8e6f4580-9b21-4b99-a48d-56df09b6863d\") " Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.702104 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.702819 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.751201 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m" (OuterVolumeSpecName: "kube-api-access-g7b4m") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "kube-api-access-g7b4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.762681 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts" (OuterVolumeSpecName: "scripts") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.776265 4703 generic.go:334] "Generic (PLEG): container finished" podID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerID="875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9" exitCode=0 Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.776426 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerDied","Data":"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9"} Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.776475 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8e6f4580-9b21-4b99-a48d-56df09b6863d","Type":"ContainerDied","Data":"94d4470cecec010545b42d422d4576dc571f3d826d894c4c88d1143f29dfa75c"} Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.776496 4703 scope.go:117] "RemoveContainer" containerID="c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.776732 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.802058 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.802093 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.802103 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8e6f4580-9b21-4b99-a48d-56df09b6863d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.802111 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7b4m\" (UniqueName: \"kubernetes.io/projected/8e6f4580-9b21-4b99-a48d-56df09b6863d-kube-api-access-g7b4m\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.814896 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerStarted","Data":"48c05d73d53bc800c5ffe8bff928f72da2aa3945b798340d46c0b0062cfdbabd"} Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.814960 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerStarted","Data":"0ee371b4f1d8bd42133291351d0b47b3eae1bd37ac1207c8b7c5665ac5521a83"} Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.838098 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.868397 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.868366967 podStartE2EDuration="2.868366967s" podCreationTimestamp="2025-12-09 12:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:56.836545093 +0000 UTC m=+1616.085308622" watchObservedRunningTime="2025-12-09 12:31:56.868366967 +0000 UTC m=+1616.117130486" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.880171 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.892513 4703 scope.go:117] "RemoveContainer" containerID="95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.905178 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.905225 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.925466 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:56 crc kubenswrapper[4703]: I1209 12:31:56.930423 4703 scope.go:117] "RemoveContainer" containerID="875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.004975 4703 scope.go:117] "RemoveContainer" containerID="d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.008403 4703 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.031614 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data" (OuterVolumeSpecName: "config-data") pod "8e6f4580-9b21-4b99-a48d-56df09b6863d" (UID: "8e6f4580-9b21-4b99-a48d-56df09b6863d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.049041 4703 scope.go:117] "RemoveContainer" containerID="c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.051997 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8\": container with ID starting with c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8 not found: ID does not exist" containerID="c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.052053 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8"} err="failed to get container status \"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8\": rpc error: code = NotFound desc = could not find container \"c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8\": container with ID starting with c8680b06f3e0e798f5d1b1e497f72bcd0d29c85033186fc128d4eb0285a3a0a8 not found: ID does not exist" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.052081 4703 scope.go:117] "RemoveContainer" containerID="95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.053509 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab\": container with ID starting with 95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab not found: ID does not exist" containerID="95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.053618 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab"} err="failed to get container status \"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab\": rpc error: code = NotFound desc = could not find container \"95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab\": container with ID starting with 95f9a0056afa1ac922e8f9ad5739c14e756b393576e27c55ceab2ca4145c81ab not found: ID does not exist" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.053659 4703 scope.go:117] "RemoveContainer" containerID="875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.054162 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9\": container with ID starting with 875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9 not found: ID does not exist" containerID="875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.054214 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9"} err="failed to get container status \"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9\": rpc error: code = NotFound desc = could not 
find container \"875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9\": container with ID starting with 875998a873b05c99494028fac72afac183d53648c2f16b0fe9f95f56b7a94dd9 not found: ID does not exist" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.054243 4703 scope.go:117] "RemoveContainer" containerID="d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.054748 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c\": container with ID starting with d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c not found: ID does not exist" containerID="d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.054801 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c"} err="failed to get container status \"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c\": rpc error: code = NotFound desc = could not find container \"d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c\": container with ID starting with d26bdd74a38cb56b4c770f7d6f0286a1ee1e74f6f683727a70b3cdb301a0ec8c not found: ID does not exist" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.117928 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e6f4580-9b21-4b99-a48d-56df09b6863d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.259610 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.272918 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.286076 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.287018 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-central-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287046 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-central-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.287059 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="sg-core" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287067 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="sg-core" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.287082 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-notification-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287090 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-notification-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.287107 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" 
containerName="proxy-httpd" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287114 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="proxy-httpd" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287400 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-notification-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287424 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="ceilometer-central-agent" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287437 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="sg-core" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.287452 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" containerName="proxy-httpd" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.290775 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.294315 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.297234 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.298006 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.307278 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.415352 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425155 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425280 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425330 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425357 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 
09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425418 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425537 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425555 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.425647 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7hp\" (UniqueName: \"kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.512501 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.513130 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerName="dnsmasq-dns" containerID="cri-o://1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03" gracePeriod=10 Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533093 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7hp\" (UniqueName: \"kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533321 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533369 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533467 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533507 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533662 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.533759 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.535389 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.537394 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.540293 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.555777 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.559220 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.561138 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.568626 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " 
pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.573434 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.573815 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7hp\" (UniqueName: \"kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp\") pod \"ceilometer-0\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") " pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.634589 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.635221 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data\") pod \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.635526 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts\") pod \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.635629 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle\") pod \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.635694 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb6dr\" (UniqueName: \"kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr\") pod \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\" (UID: \"3bb06d6a-e31a-4624-bbca-fa4f970d5685\") " Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.646643 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr" (OuterVolumeSpecName: "kube-api-access-cb6dr") pod "3bb06d6a-e31a-4624-bbca-fa4f970d5685" (UID: "3bb06d6a-e31a-4624-bbca-fa4f970d5685"). InnerVolumeSpecName "kube-api-access-cb6dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.650386 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts" (OuterVolumeSpecName: "scripts") pod "3bb06d6a-e31a-4624-bbca-fa4f970d5685" (UID: "3bb06d6a-e31a-4624-bbca-fa4f970d5685"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.717500 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bb06d6a-e31a-4624-bbca-fa4f970d5685" (UID: "3bb06d6a-e31a-4624-bbca-fa4f970d5685"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.736462 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data" (OuterVolumeSpecName: "config-data") pod "3bb06d6a-e31a-4624-bbca-fa4f970d5685" (UID: "3bb06d6a-e31a-4624-bbca-fa4f970d5685"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.738610 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.738650 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.738665 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb6dr\" (UniqueName: \"kubernetes.io/projected/3bb06d6a-e31a-4624-bbca-fa4f970d5685-kube-api-access-cb6dr\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.738679 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb06d6a-e31a-4624-bbca-fa4f970d5685-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.863610 4703 generic.go:334] "Generic (PLEG): container finished" podID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerID="1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03" exitCode=0 Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.863967 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" event={"ID":"c1d4c235-81e5-4672-bbb4-876c9e53861d","Type":"ContainerDied","Data":"1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03"} Dec 09 12:31:57 crc kubenswrapper[4703]: E1209 12:31:57.869788 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d4c235_81e5_4672_bbb4_876c9e53861d.slice/crio-1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.883792 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-znxfc" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.884233 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-znxfc" event={"ID":"3bb06d6a-e31a-4624-bbca-fa4f970d5685","Type":"ContainerDied","Data":"9bea482bfa6f8f195d6a5df823ee1c8ac52e141803b798dd35786d546cbf795b"} Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.884399 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bea482bfa6f8f195d6a5df823ee1c8ac52e141803b798dd35786d546cbf795b" Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.941658 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.964985 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.965293 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="828c53a9-97b5-4039-bca6-9005809584e3" containerName="nova-scheduler-scheduler" containerID="cri-o://d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" gracePeriod=30 Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.975377 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.976081 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" containerID="cri-o://86cdd72d3e398e05be8bfbf68b63f4b3d01609abd7c732bf286f6a8be4e9e44c" gracePeriod=30 Dec 09 12:31:57 crc kubenswrapper[4703]: I1209 12:31:57.976378 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" containerID="cri-o://69fce99ba6cf97efb89b40fe942d16d62f986bb754d5974d71ed874867b69023" gracePeriod=30 Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.473877 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573379 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573552 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb89q\" (UniqueName: \"kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573573 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573764 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573800 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.573824 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc\") pod \"c1d4c235-81e5-4672-bbb4-876c9e53861d\" (UID: \"c1d4c235-81e5-4672-bbb4-876c9e53861d\") " Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.897480 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" event={"ID":"c1d4c235-81e5-4672-bbb4-876c9e53861d","Type":"ContainerDied","Data":"d5f4ec2946f11b2b5d8de7fd81024cc1dba13ef3a173f3e924fbac728185482d"} Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.897788 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hnbh4" Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.899421 4703 scope.go:117] "RemoveContainer" containerID="1e3ea8abaa226d43882828880d6850a6f0979ade6397dd87946bf10b409c9a03" Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.900379 4703 generic.go:334] "Generic (PLEG): container finished" podID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerID="86cdd72d3e398e05be8bfbf68b63f4b3d01609abd7c732bf286f6a8be4e9e44c" exitCode=143 Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.900779 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerDied","Data":"86cdd72d3e398e05be8bfbf68b63f4b3d01609abd7c732bf286f6a8be4e9e44c"} Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.902424 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-log" containerID="cri-o://0ee371b4f1d8bd42133291351d0b47b3eae1bd37ac1207c8b7c5665ac5521a83" gracePeriod=30 Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.902549 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-api" containerID="cri-o://48c05d73d53bc800c5ffe8bff928f72da2aa3945b798340d46c0b0062cfdbabd" gracePeriod=30 Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.931721 4703 scope.go:117] "RemoveContainer" containerID="582d8964df1c7146da1c04557f013ee46915c241f761dcf9849607419fd86c03" Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.981995 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q" (OuterVolumeSpecName: "kube-api-access-pb89q") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "kube-api-access-pb89q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:58 crc kubenswrapper[4703]: I1209 12:31:58.985121 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb89q\" (UniqueName: \"kubernetes.io/projected/c1d4c235-81e5-4672-bbb4-876c9e53861d-kube-api-access-pb89q\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.017319 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.023067 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.033929 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config" (OuterVolumeSpecName: "config") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.035004 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.042711 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.043420 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1d4c235-81e5-4672-bbb4-876c9e53861d" (UID: "c1d4c235-81e5-4672-bbb4-876c9e53861d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.106010 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.106063 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.106094 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.106111 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.106124 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1d4c235-81e5-4672-bbb4-876c9e53861d-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.127122 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6f4580-9b21-4b99-a48d-56df09b6863d" path="/var/lib/kubelet/pods/8e6f4580-9b21-4b99-a48d-56df09b6863d/volumes" Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.242741 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.253602 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hnbh4"] Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.922527 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerStarted","Data":"8c57ab86d5be4c06d67c63b01d46e679f56da463b4ca5287364603d68d872d22"} Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.927797 4703 generic.go:334] "Generic (PLEG): container finished" podID="b376cf43-332b-45e8-b384-5b885172504d" containerID="48c05d73d53bc800c5ffe8bff928f72da2aa3945b798340d46c0b0062cfdbabd" exitCode=0 Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.927851 4703 generic.go:334] "Generic (PLEG): container finished" podID="b376cf43-332b-45e8-b384-5b885172504d" containerID="0ee371b4f1d8bd42133291351d0b47b3eae1bd37ac1207c8b7c5665ac5521a83" exitCode=143 Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.927886 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerDied","Data":"48c05d73d53bc800c5ffe8bff928f72da2aa3945b798340d46c0b0062cfdbabd"} Dec 09 12:31:59 crc kubenswrapper[4703]: I1209 12:31:59.927925 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerDied","Data":"0ee371b4f1d8bd42133291351d0b47b3eae1bd37ac1207c8b7c5665ac5521a83"} Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.325137 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437520 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437623 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437664 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437749 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437789 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: \"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.437833 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs\") pod \"b376cf43-332b-45e8-b384-5b885172504d\" (UID: 
\"b376cf43-332b-45e8-b384-5b885172504d\") " Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.440810 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs" (OuterVolumeSpecName: "logs") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.447310 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6" (OuterVolumeSpecName: "kube-api-access-62lq6") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "kube-api-access-62lq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.485558 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data" (OuterVolumeSpecName: "config-data") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.487678 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.523933 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.528033 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b376cf43-332b-45e8-b384-5b885172504d" (UID: "b376cf43-332b-45e8-b384-5b885172504d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543060 4703 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543129 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b376cf43-332b-45e8-b384-5b885172504d-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543147 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543160 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/b376cf43-332b-45e8-b384-5b885172504d-kube-api-access-62lq6\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543173 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.543185 4703 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b376cf43-332b-45e8-b384-5b885172504d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.946517 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerStarted","Data":"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"} Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.946905 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerStarted","Data":"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"} Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.951804 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b376cf43-332b-45e8-b384-5b885172504d","Type":"ContainerDied","Data":"cb7e6af9217a1cb49b27066c7a9609c42b11404b39c5587e70de5cd0f4dd8f7b"} Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.951859 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.951890 4703 scope.go:117] "RemoveContainer" containerID="48c05d73d53bc800c5ffe8bff928f72da2aa3945b798340d46c0b0062cfdbabd" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.986276 4703 scope.go:117] "RemoveContainer" containerID="0ee371b4f1d8bd42133291351d0b47b3eae1bd37ac1207c8b7c5665ac5521a83" Dec 09 12:32:00 crc kubenswrapper[4703]: I1209 12:32:00.996741 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.016893 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.035373 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.035965 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-api" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.035990 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-api" Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.036019 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb06d6a-e31a-4624-bbca-fa4f970d5685" containerName="nova-manage" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036029 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb06d6a-e31a-4624-bbca-fa4f970d5685" containerName="nova-manage" Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.036047 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerName="dnsmasq-dns" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036054 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerName="dnsmasq-dns" Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.036066 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-log" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036071 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-log" Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.036080 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerName="init" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036085 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" containerName="init" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036320 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-api" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036345 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb06d6a-e31a-4624-bbca-fa4f970d5685" containerName="nova-manage" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036370 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b376cf43-332b-45e8-b384-5b885172504d" containerName="nova-api-log" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.036382 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" 
containerName="dnsmasq-dns" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.037784 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.043062 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.043362 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.043501 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.050165 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.124879 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b376cf43-332b-45e8-b384-5b885172504d" path="/var/lib/kubelet/pods/b376cf43-332b-45e8-b384-5b885172504d/volumes" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.126215 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d4c235-81e5-4672-bbb4-876c9e53861d" path="/var/lib/kubelet/pods/c1d4c235-81e5-4672-bbb4-876c9e53861d/volumes" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157381 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d87ed196-d02a-4a6b-95b7-0835983307f4-logs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157518 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157544 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157610 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-config-data\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157636 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.157725 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx4l\" (UniqueName: \"kubernetes.io/projected/d87ed196-d02a-4a6b-95b7-0835983307f4-kube-api-access-ssx4l\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " 
pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.192036 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:37850->10.217.0.216:8775: read: connection reset by peer" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.192055 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": read tcp 10.217.0.2:37862->10.217.0.216:8775: read: connection reset by peer" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259434 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d87ed196-d02a-4a6b-95b7-0835983307f4-logs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259542 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259568 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259612 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-config-data\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259639 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.259707 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx4l\" (UniqueName: \"kubernetes.io/projected/d87ed196-d02a-4a6b-95b7-0835983307f4-kube-api-access-ssx4l\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.260011 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d87ed196-d02a-4a6b-95b7-0835983307f4-logs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.268027 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " 
pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.268212 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.269665 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.270270 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d87ed196-d02a-4a6b-95b7-0835983307f4-config-data\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.284119 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx4l\" (UniqueName: \"kubernetes.io/projected/d87ed196-d02a-4a6b-95b7-0835983307f4-kube-api-access-ssx4l\") pod \"nova-api-0\" (UID: \"d87ed196-d02a-4a6b-95b7-0835983307f4\") " pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.381403 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.747273 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.749883 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.751232 4703 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:32:01 crc kubenswrapper[4703]: E1209 12:32:01.751272 4703 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="828c53a9-97b5-4039-bca6-9005809584e3" containerName="nova-scheduler-scheduler" Dec 09 12:32:01 crc kubenswrapper[4703]: I1209 12:32:01.909300 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:32:02 crc kubenswrapper[4703]: I1209 12:32:02.018452 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d87ed196-d02a-4a6b-95b7-0835983307f4","Type":"ContainerStarted","Data":"e26f1cf233aec15692412ca9d7d7abe5609cb69d6cb6b704a160d1cea6fc4a11"} Dec 09 12:32:02 crc kubenswrapper[4703]: I1209 12:32:02.027589 4703 generic.go:334] "Generic (PLEG): container finished" podID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerID="69fce99ba6cf97efb89b40fe942d16d62f986bb754d5974d71ed874867b69023" exitCode=0 Dec 09 12:32:02 crc kubenswrapper[4703]: I1209 12:32:02.027666 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerDied","Data":"69fce99ba6cf97efb89b40fe942d16d62f986bb754d5974d71ed874867b69023"} Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.047545 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerStarted","Data":"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"} Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.050551 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d87ed196-d02a-4a6b-95b7-0835983307f4","Type":"ContainerStarted","Data":"7d06a41a6823a90e17aa76f90841a8f158257c2c4aaf509343f3d68462bc7fb9"} Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.166903 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.261024 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs\") pod \"1167f24a-93e8-4578-b9e7-44bca208de2c\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.261531 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs\") pod \"1167f24a-93e8-4578-b9e7-44bca208de2c\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.261613 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs" (OuterVolumeSpecName: "logs") pod "1167f24a-93e8-4578-b9e7-44bca208de2c" (UID: "1167f24a-93e8-4578-b9e7-44bca208de2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.261897 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqptj\" (UniqueName: \"kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj\") pod \"1167f24a-93e8-4578-b9e7-44bca208de2c\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.262049 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle\") pod \"1167f24a-93e8-4578-b9e7-44bca208de2c\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.262149 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data\") pod \"1167f24a-93e8-4578-b9e7-44bca208de2c\" (UID: \"1167f24a-93e8-4578-b9e7-44bca208de2c\") " Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.262881 4703 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1167f24a-93e8-4578-b9e7-44bca208de2c-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.270789 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj" (OuterVolumeSpecName: "kube-api-access-nqptj") pod "1167f24a-93e8-4578-b9e7-44bca208de2c" (UID: "1167f24a-93e8-4578-b9e7-44bca208de2c"). InnerVolumeSpecName "kube-api-access-nqptj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.336792 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data" (OuterVolumeSpecName: "config-data") pod "1167f24a-93e8-4578-b9e7-44bca208de2c" (UID: "1167f24a-93e8-4578-b9e7-44bca208de2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.367319 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqptj\" (UniqueName: \"kubernetes.io/projected/1167f24a-93e8-4578-b9e7-44bca208de2c-kube-api-access-nqptj\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.367353 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.391399 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1167f24a-93e8-4578-b9e7-44bca208de2c" (UID: "1167f24a-93e8-4578-b9e7-44bca208de2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.404308 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1167f24a-93e8-4578-b9e7-44bca208de2c" (UID: "1167f24a-93e8-4578-b9e7-44bca208de2c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.470280 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4703]: I1209 12:32:03.470336 4703 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1167f24a-93e8-4578-b9e7-44bca208de2c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.067035 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1167f24a-93e8-4578-b9e7-44bca208de2c","Type":"ContainerDied","Data":"1857e1a06c1846c3b5043ea036788e40243ffaa0ffed95e81cd9af7e1db85cf6"} Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.067089 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.067115 4703 scope.go:117] "RemoveContainer" containerID="69fce99ba6cf97efb89b40fe942d16d62f986bb754d5974d71ed874867b69023" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.080362 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d87ed196-d02a-4a6b-95b7-0835983307f4","Type":"ContainerStarted","Data":"d01b73311ee367b98f79b051878c0b18298df28683e7368489240383f81adc2e"} Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.089355 4703 generic.go:334] "Generic (PLEG): container finished" podID="828c53a9-97b5-4039-bca6-9005809584e3" containerID="d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" exitCode=0 Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.089408 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"828c53a9-97b5-4039-bca6-9005809584e3","Type":"ContainerDied","Data":"d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b"} Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.117764 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.117732503 podStartE2EDuration="4.117732503s" podCreationTimestamp="2025-12-09 12:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:32:04.110144794 +0000 UTC m=+1623.358908313" watchObservedRunningTime="2025-12-09 12:32:04.117732503 +0000 UTC m=+1623.366496022" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.134464 4703 scope.go:117] "RemoveContainer" containerID="86cdd72d3e398e05be8bfbf68b63f4b3d01609abd7c732bf286f6a8be4e9e44c" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.136939 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.149463 4703 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.184466 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:32:04 crc kubenswrapper[4703]: E1209 12:32:04.185140 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.185160 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" Dec 09 12:32:04 crc kubenswrapper[4703]: E1209 12:32:04.185220 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.185227 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.185508 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-log" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.185545 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" containerName="nova-metadata-metadata" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.187172 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.194810 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.195125 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.220280 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.290046 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa203c51-b3d0-4fe3-9471-9e129b93794e-logs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.290754 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.290877 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.290976 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kv9l\" (UniqueName: \"kubernetes.io/projected/aa203c51-b3d0-4fe3-9471-9e129b93794e-kube-api-access-5kv9l\") pod 
\"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.291120 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-config-data\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394055 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa203c51-b3d0-4fe3-9471-9e129b93794e-logs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394311 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394396 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394436 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kv9l\" (UniqueName: \"kubernetes.io/projected/aa203c51-b3d0-4fe3-9471-9e129b93794e-kube-api-access-5kv9l\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394491 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-config-data\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.394645 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa203c51-b3d0-4fe3-9471-9e129b93794e-logs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.400405 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-config-data\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.401177 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.402105 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aa203c51-b3d0-4fe3-9471-9e129b93794e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.413914 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kv9l\" (UniqueName: \"kubernetes.io/projected/aa203c51-b3d0-4fe3-9471-9e129b93794e-kube-api-access-5kv9l\") pod \"nova-metadata-0\" (UID: \"aa203c51-b3d0-4fe3-9471-9e129b93794e\") " pod="openstack/nova-metadata-0" Dec 09 12:32:04 crc kubenswrapper[4703]: I1209 12:32:04.478603 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.083858 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.096415 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1167f24a-93e8-4578-b9e7-44bca208de2c" path="/var/lib/kubelet/pods/1167f24a-93e8-4578-b9e7-44bca208de2c/volumes" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.113953 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerStarted","Data":"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"} Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.114185 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.123029 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.123715 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"828c53a9-97b5-4039-bca6-9005809584e3","Type":"ContainerDied","Data":"025bfb8948946c244e5d3a9a903b5688ff65dc46307cf14cd12c8dec859a6bf6"} Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.123766 4703 scope.go:117] "RemoveContainer" containerID="d3a38150585efbacae8459515ea5722319dc446bed4affa411f1cc847b2dfa4b" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.174872 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.044513438 podStartE2EDuration="8.174840836s" podCreationTimestamp="2025-12-09 12:31:57 +0000 UTC" firstStartedPulling="2025-12-09 12:31:59.052242544 +0000 UTC m=+1618.301006063" lastFinishedPulling="2025-12-09 12:32:04.182569952 +0000 UTC m=+1623.431333461" observedRunningTime="2025-12-09 12:32:05.147103363 +0000 UTC m=+1624.395866882" watchObservedRunningTime="2025-12-09 12:32:05.174840836 +0000 UTC m=+1624.423604355" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.221901 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vbls\" (UniqueName: \"kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls\") pod \"828c53a9-97b5-4039-bca6-9005809584e3\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.222070 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle\") pod 
\"828c53a9-97b5-4039-bca6-9005809584e3\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.222247 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data\") pod \"828c53a9-97b5-4039-bca6-9005809584e3\" (UID: \"828c53a9-97b5-4039-bca6-9005809584e3\") " Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.235539 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls" (OuterVolumeSpecName: "kube-api-access-9vbls") pod "828c53a9-97b5-4039-bca6-9005809584e3" (UID: "828c53a9-97b5-4039-bca6-9005809584e3"). InnerVolumeSpecName "kube-api-access-9vbls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.284838 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "828c53a9-97b5-4039-bca6-9005809584e3" (UID: "828c53a9-97b5-4039-bca6-9005809584e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.287960 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data" (OuterVolumeSpecName: "config-data") pod "828c53a9-97b5-4039-bca6-9005809584e3" (UID: "828c53a9-97b5-4039-bca6-9005809584e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.293303 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.329860 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.329904 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vbls\" (UniqueName: \"kubernetes.io/projected/828c53a9-97b5-4039-bca6-9005809584e3-kube-api-access-9vbls\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.329915 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c53a9-97b5-4039-bca6-9005809584e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.486803 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.500059 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.516770 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:32:05 crc kubenswrapper[4703]: E1209 12:32:05.517530 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828c53a9-97b5-4039-bca6-9005809584e3" containerName="nova-scheduler-scheduler" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.517563 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="828c53a9-97b5-4039-bca6-9005809584e3" containerName="nova-scheduler-scheduler" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.517928 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="828c53a9-97b5-4039-bca6-9005809584e3" containerName="nova-scheduler-scheduler" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.519162 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.523690 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.529258 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.638929 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.639024 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-config-data\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.639344 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zvq\" (UniqueName: \"kubernetes.io/projected/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-kube-api-access-t4zvq\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.741879 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4zvq\" (UniqueName: \"kubernetes.io/projected/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-kube-api-access-t4zvq\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.742944 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.743074 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-config-data\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.749610 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.751771 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-config-data\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.762702 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4zvq\" (UniqueName: \"kubernetes.io/projected/7e4de361-7a9f-42a1-9c5d-35e890f37e8b-kube-api-access-t4zvq\") pod \"nova-scheduler-0\" (UID: \"7e4de361-7a9f-42a1-9c5d-35e890f37e8b\") " pod="openstack/nova-scheduler-0" Dec 09 12:32:05 crc kubenswrapper[4703]: I1209 12:32:05.896177 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:32:06 crc kubenswrapper[4703]: I1209 12:32:06.154271 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa203c51-b3d0-4fe3-9471-9e129b93794e","Type":"ContainerStarted","Data":"1f158e24403cbaff1a6ea9174398ba31352702e60697a0a03d783bdbcfb56c40"} Dec 09 12:32:06 crc kubenswrapper[4703]: I1209 12:32:06.154684 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa203c51-b3d0-4fe3-9471-9e129b93794e","Type":"ContainerStarted","Data":"464cc72530fd4684c4df0daed0c1f7e410cb7745ac7b50ea6ce5403d84f7f36a"} Dec 09 12:32:06 crc kubenswrapper[4703]: I1209 12:32:06.601380 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:32:06 crc kubenswrapper[4703]: W1209 12:32:06.610610 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e4de361_7a9f_42a1_9c5d_35e890f37e8b.slice/crio-861afbba5bef8d0e6ff97db2f38703a86ecb8b40d5c54e6658038e2dc19aa5cd WatchSource:0}: Error finding container 861afbba5bef8d0e6ff97db2f38703a86ecb8b40d5c54e6658038e2dc19aa5cd: Status 404 returned error can't find the container with id 861afbba5bef8d0e6ff97db2f38703a86ecb8b40d5c54e6658038e2dc19aa5cd Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.084091 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:32:07 crc kubenswrapper[4703]: E1209 12:32:07.084786 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.096806 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828c53a9-97b5-4039-bca6-9005809584e3" path="/var/lib/kubelet/pods/828c53a9-97b5-4039-bca6-9005809584e3/volumes" Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.189358 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7e4de361-7a9f-42a1-9c5d-35e890f37e8b","Type":"ContainerStarted","Data":"dc9e619470ad6fb2ef17f10063a008a2f7dfe51f41aa7a66759189944dfe59dd"} Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.190252 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7e4de361-7a9f-42a1-9c5d-35e890f37e8b","Type":"ContainerStarted","Data":"861afbba5bef8d0e6ff97db2f38703a86ecb8b40d5c54e6658038e2dc19aa5cd"} Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.192863 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aa203c51-b3d0-4fe3-9471-9e129b93794e","Type":"ContainerStarted","Data":"7d84ead9305ff15c3a60dae900d446b121e4e7cdd3d91177f59c017ecaaa6ff9"} Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.216098 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.216072368 podStartE2EDuration="2.216072368s" podCreationTimestamp="2025-12-09 12:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:32:07.206753828 +0000 UTC m=+1626.455517347" watchObservedRunningTime="2025-12-09 12:32:07.216072368 +0000 UTC m=+1626.464835887" Dec 09 12:32:07 crc kubenswrapper[4703]: I1209 12:32:07.245109 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.245071395 podStartE2EDuration="3.245071395s" podCreationTimestamp="2025-12-09 12:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:32:07.233379432 +0000 UTC m=+1626.482142951" watchObservedRunningTime="2025-12-09 12:32:07.245071395 +0000 UTC m=+1626.493834914" Dec 09 12:32:09 crc kubenswrapper[4703]: I1209 12:32:09.479236 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:32:09 crc kubenswrapper[4703]: I1209 12:32:09.479639 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:32:10 crc kubenswrapper[4703]: I1209 12:32:10.896874 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 12:32:11 crc kubenswrapper[4703]: I1209 12:32:11.383290 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:32:11 crc kubenswrapper[4703]: I1209 12:32:11.383391 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:32:12 crc kubenswrapper[4703]: I1209 12:32:12.403523 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d87ed196-d02a-4a6b-95b7-0835983307f4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:12 crc kubenswrapper[4703]: I1209 12:32:12.403527 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d87ed196-d02a-4a6b-95b7-0835983307f4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:14 crc kubenswrapper[4703]: I1209 12:32:14.479488 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:32:14 crc kubenswrapper[4703]: I1209 12:32:14.479785 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:32:15 crc kubenswrapper[4703]: I1209 12:32:15.491397 4703 prober.go:107] "Probe 
failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa203c51-b3d0-4fe3-9471-9e129b93794e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:15 crc kubenswrapper[4703]: I1209 12:32:15.491871 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aa203c51-b3d0-4fe3-9471-9e129b93794e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:15 crc kubenswrapper[4703]: I1209 12:32:15.897270 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 12:32:15 crc kubenswrapper[4703]: I1209 12:32:15.944613 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 12:32:16 crc kubenswrapper[4703]: I1209 12:32:16.358390 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 12:32:21 crc kubenswrapper[4703]: I1209 12:32:21.080929 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:32:21 crc kubenswrapper[4703]: E1209 12:32:21.082287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:32:21 crc kubenswrapper[4703]: I1209 12:32:21.390526 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:32:21 crc kubenswrapper[4703]: I1209 12:32:21.391599 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:32:21 crc kubenswrapper[4703]: I1209 12:32:21.394505 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:32:21 crc kubenswrapper[4703]: I1209 12:32:21.398872 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:32:22 crc kubenswrapper[4703]: I1209 12:32:22.401610 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:32:22 crc kubenswrapper[4703]: I1209 12:32:22.414862 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:32:24 crc kubenswrapper[4703]: I1209 12:32:24.484762 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:32:24 crc kubenswrapper[4703]: I1209 12:32:24.485162 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:32:24 crc kubenswrapper[4703]: I1209 12:32:24.490589 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:32:24 crc kubenswrapper[4703]: I1209 12:32:24.491136 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:32:27 crc kubenswrapper[4703]: I1209 
Dec 09 12:32:35 crc kubenswrapper[4703]: I1209 12:32:35.069631 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"
Dec 09 12:32:35 crc kubenswrapper[4703]: E1209 12:32:35.070476 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.588778 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-6ljb8"]
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.593496 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.596571 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.612343 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-jhwbv"]
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.627269 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-jhwbv"]
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.641805 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6ljb8"]
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.747695 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kdkz\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-kube-api-access-6kdkz\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.747786 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.747895 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-config-data\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.747959 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-certs\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.748035 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-scripts\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.850915 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-scripts\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.851105 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kdkz\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-kube-api-access-6kdkz\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.851173 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.851331 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-config-data\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.851376 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-certs\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.865037 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-scripts\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.867131 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-config-data\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.867714 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-certs\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.876414 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862e9d91-760e-43af-aba3-a23255b0fd7a-combined-ca-bundle\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.901243 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kdkz\" (UniqueName: \"kubernetes.io/projected/862e9d91-760e-43af-aba3-a23255b0fd7a-kube-api-access-6kdkz\") pod \"cloudkitty-db-sync-6ljb8\" (UID: \"862e9d91-760e-43af-aba3-a23255b0fd7a\") " pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:40 crc kubenswrapper[4703]: I1209 12:32:40.929088 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-6ljb8"
Dec 09 12:32:41 crc kubenswrapper[4703]: I1209 12:32:41.086218 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc2a644-ad99-4857-9913-562b6ed7371f" path="/var/lib/kubelet/pods/8dc2a644-ad99-4857-9913-562b6ed7371f/volumes"
Dec 09 12:32:41 crc kubenswrapper[4703]: I1209 12:32:41.579596 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-6ljb8"]
Dec 09 12:32:41 crc kubenswrapper[4703]: I1209 12:32:41.627627 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-6ljb8" event={"ID":"862e9d91-760e-43af-aba3-a23255b0fd7a","Type":"ContainerStarted","Data":"b9fa51f5a844b19bebbe01e83b148b307733b6ca85a722b8878dd59e4ef0f4ee"}
Dec 09 12:32:41 crc kubenswrapper[4703]: E1209 12:32:41.701301 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 12:32:41 crc kubenswrapper[4703]: E1209 12:32:41.701395 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 12:32:41 crc kubenswrapper[4703]: E1209 12:32:41.701634 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 12:32:41 crc kubenswrapper[4703]: E1209 12:32:41.703383 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:32:42 crc kubenswrapper[4703]: E1209 12:32:42.643346 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.653449 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.961689 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.962567 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="proxy-httpd" containerID="cri-o://2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375" gracePeriod=30
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.962554 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-central-agent" containerID="cri-o://2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847" gracePeriod=30
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.962804 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="sg-core" containerID="cri-o://1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293" gracePeriod=30
Dec 09 12:32:42 crc kubenswrapper[4703]: I1209 12:32:42.962902 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-notification-agent" containerID="cri-o://78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285" gracePeriod=30
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.657601 4703 generic.go:334] "Generic (PLEG): container finished" podID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerID="2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375" exitCode=0
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.657940 4703 generic.go:334] "Generic (PLEG): container finished" podID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerID="1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293" exitCode=2
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.657956 4703 generic.go:334] "Generic (PLEG): container finished" podID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerID="2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847" exitCode=0
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.657751 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerDied","Data":"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"}
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.658010 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerDied","Data":"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"}
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.658032 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerDied","Data":"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"}
Dec 09 12:32:43 crc kubenswrapper[4703]: I1209 12:32:43.694177 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.381637 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552155 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552227 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552326 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552447 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552519 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552603 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552631 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.552669 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr7hp\" (UniqueName: \"kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp\") pod \"34da11b1-8f55-41e0-b132-09d5c4a64896\" (UID: \"34da11b1-8f55-41e0-b132-09d5c4a64896\") "
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.553462 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.554067 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.566317 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp" (OuterVolumeSpecName: "kube-api-access-fr7hp") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "kube-api-access-fr7hp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.566476 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts" (OuterVolumeSpecName: "scripts") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.603737 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.655995 4703 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.656339 4703 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.656349 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr7hp\" (UniqueName: \"kubernetes.io/projected/34da11b1-8f55-41e0-b132-09d5c4a64896-kube-api-access-fr7hp\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.656358 4703 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.656367 4703 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34da11b1-8f55-41e0-b132-09d5c4a64896-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.693996 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.719818 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.726234 4703 generic.go:334] "Generic (PLEG): container finished" podID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerID="78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285" exitCode=0
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.726299 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerDied","Data":"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"}
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.726343 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"34da11b1-8f55-41e0-b132-09d5c4a64896","Type":"ContainerDied","Data":"8c57ab86d5be4c06d67c63b01d46e679f56da463b4ca5287364603d68d872d22"}
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.726367 4703 scope.go:117] "RemoveContainer" containerID="2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.726390 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.745643 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data" (OuterVolumeSpecName: "config-data") pod "34da11b1-8f55-41e0-b132-09d5c4a64896" (UID: "34da11b1-8f55-41e0-b132-09d5c4a64896"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.753328 4703 scope.go:117] "RemoveContainer" containerID="1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.758919 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.758963 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.758977 4703 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/34da11b1-8f55-41e0-b132-09d5c4a64896-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.778630 4703 scope.go:117] "RemoveContainer" containerID="78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.809805 4703 scope.go:117] "RemoveContainer" containerID="2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.899485 4703 scope.go:117] "RemoveContainer" containerID="2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"
Dec 09 12:32:47 crc kubenswrapper[4703]: E1209 12:32:47.900107 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375\": container with ID starting with 2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375 not found: ID does not exist" containerID="2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.900159 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"} err="failed to get container status \"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375\": rpc error: code = NotFound desc = could not find container \"2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375\": container with ID starting with 2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375 not found: ID does not exist"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.900289 4703 scope.go:117] "RemoveContainer" containerID="1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"
Dec 09 12:32:47 crc kubenswrapper[4703]: E1209 12:32:47.900901 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293\": container with ID starting with 1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293 not found: ID does not exist" containerID="1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.900936 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293"} err="failed to get container status \"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293\": rpc error: code = NotFound desc = could not find container \"1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293\": container with ID starting with 1ce5274040b272fb4f94a21285fb55b64bff71c94f8ed44f728c6387799ca293 not found: ID does not exist"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.900962 4703 scope.go:117] "RemoveContainer" containerID="78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"
Dec 09 12:32:47 crc kubenswrapper[4703]: E1209 12:32:47.901365 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285\": container with ID starting with 78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285 not found: ID does not exist" containerID="78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.901387 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285"} err="failed to get container status \"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285\": rpc error: code = NotFound desc = could not find container \"78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285\": container with ID starting with 78e94d4048892383566cf64cfa0d83572099f19984cefe4a6660087d9f262285 not found: ID does not exist"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.901404 4703 scope.go:117] "RemoveContainer" containerID="2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"
Dec 09 12:32:47 crc kubenswrapper[4703]: E1209 12:32:47.901654 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847\": container with ID starting with 2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847 not found: ID does not exist" containerID="2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"
Dec 09 12:32:47 crc kubenswrapper[4703]: I1209 12:32:47.901680 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847"} err="failed to get container status \"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847\": rpc error: code = NotFound desc = could not find container \"2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847\": container with ID starting with 2ae0ff772b58f091354de9d463214f251e4bcbca0304c209476d742f897f3847 not found: ID does not exist"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.076701 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.098098 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.116930 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
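The NotFound errors above are benign: the four ceilometer containers were killed with gracePeriod=30 and had already been removed, so by the time the deferred "RemoveContainer" work runs again, CRI-O has discarded them and ContainerStatus returns NotFound. Cleanup that races with runtime garbage collection has to treat "already gone" as success. A sketch of that idempotent-delete pattern follows; the runtime type and the sentinel error are invented for the illustration and are not kubelet or CRI-O types.

```go
// Idempotent-cleanup pattern implied by the NotFound errors above: a delete
// that races with runtime garbage collection treats "already gone" as done.
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for the runtime's NotFound status (an assumption).
var ErrNotFound = errors.New("container not found: ID does not exist")

type runtime struct{ containers map[string]bool }

func (r *runtime) Remove(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, ErrNotFound)
	}
	delete(r.containers, id)
	return nil
}

func cleanup(r *runtime, id string) error {
	if err := r.Remove(id); err != nil {
		if errors.Is(err, ErrNotFound) {
			return nil // already gone: the desired end state is reached
		}
		return err
	}
	return nil
}

func main() {
	r := &runtime{containers: map[string]bool{}}
	// Container already garbage-collected; cleanup still reports success.
	fmt.Println(cleanup(r, "2b4e4ec7cdedbc488cc35c5df9063c37d869a772a0421d96587612f7a1df1375"))
}
```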
Dec 09 12:32:48 crc kubenswrapper[4703]: E1209 12:32:48.117644 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="sg-core"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.117670 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="sg-core"
Dec 09 12:32:48 crc kubenswrapper[4703]: E1209 12:32:48.117696 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-central-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.117705 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-central-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: E1209 12:32:48.117739 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-notification-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.117748 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-notification-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: E1209 12:32:48.117764 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="proxy-httpd"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.117772 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="proxy-httpd"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.118038 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="proxy-httpd"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.118163 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="sg-core"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.118184 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-central-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.118235 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" containerName="ceilometer-notification-agent"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.126635 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.134256 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.134451 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.134452 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.147665 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269695 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269784 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7hbq\" (UniqueName: \"kubernetes.io/projected/ce42c586-f397-4f98-be45-f56d36115d7a-kube-api-access-p7hbq\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269821 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269912 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-scripts\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269934 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-config-data\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.269953 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.270053 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.270079 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372077 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372177 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7hbq\" (UniqueName: \"kubernetes.io/projected/ce42c586-f397-4f98-be45-f56d36115d7a-kube-api-access-p7hbq\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372240 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372334 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-scripts\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372360 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-config-data\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372381 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372476 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.372498 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.373062 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.374805 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce42c586-f397-4f98-be45-f56d36115d7a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.383398 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-config-data\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.384214 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-scripts\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.384725 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.389610 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.391081 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42c586-f397-4f98-be45-f56d36115d7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.400243 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7hbq\" (UniqueName: \"kubernetes.io/projected/ce42c586-f397-4f98-be45-f56d36115d7a-kube-api-access-p7hbq\") pod \"ceilometer-0\" (UID: \"ce42c586-f397-4f98-be45-f56d36115d7a\") " pod="openstack/ceilometer-0"
Dec 09 12:32:48 crc kubenswrapper[4703]: I1209 12:32:48.448785 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.069916 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:32:49 crc kubenswrapper[4703]: E1209 12:32:49.070549 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.082524 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34da11b1-8f55-41e0-b132-09d5c4a64896" path="/var/lib/kubelet/pods/34da11b1-8f55-41e0-b132-09d5c4a64896/volumes" Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.136789 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.170783 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="rabbitmq" containerID="cri-o://93c8b05923234bf1df08fc613abeb309450e92de8ba8c1f863d264be47f47d01" gracePeriod=604794 Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.194761 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="rabbitmq" containerID="cri-o://d802eddc1bf30be7ca0f224c45b6c4fc51bf6ffb664ee95893f357f53dc9a666" gracePeriod=604795 Dec 09 12:32:49 crc kubenswrapper[4703]: E1209 12:32:49.209614 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:32:49 crc kubenswrapper[4703]: E1209 12:32:49.209687 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:32:49 crc kubenswrapper[4703]: E1209 12:32:49.209838 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:49 crc kubenswrapper[4703]: I1209 12:32:49.757611 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce42c586-f397-4f98-be45-f56d36115d7a","Type":"ContainerStarted","Data":"191b56c36b0c0b7868981206b3e84c82e5ac23bac161db910e4c21b9917f3ca0"} Dec 09 12:32:50 crc kubenswrapper[4703]: I1209 12:32:50.772032 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce42c586-f397-4f98-be45-f56d36115d7a","Type":"ContainerStarted","Data":"ff597c2899928ebe37fc8ddef6539faf9aaa913ff5e9a20736744c922f8f534e"} Dec 09 12:32:50 crc kubenswrapper[4703]: I1209 12:32:50.773063 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce42c586-f397-4f98-be45-f56d36115d7a","Type":"ContainerStarted","Data":"60172d2d15e380d46ccaaefd5a3f16966d6a5632e519c63ab69a90a9bf303fc4"} Dec 09 12:32:52 crc kubenswrapper[4703]: E1209 12:32:52.202690 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:32:52 crc kubenswrapper[4703]: I1209 12:32:52.798615 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce42c586-f397-4f98-be45-f56d36115d7a","Type":"ContainerStarted","Data":"1aad6172099210c791f850ebc0d5bab6d8e431e2cb2b58dcf73f515386795fb8"} Dec 09 12:32:52 crc kubenswrapper[4703]: I1209 12:32:52.798856 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:32:52 crc kubenswrapper[4703]: E1209 12:32:52.801080 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:32:52 crc kubenswrapper[4703]: I1209 12:32:52.930821 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Dec 09 12:32:53 crc kubenswrapper[4703]: I1209 12:32:53.355016 4703 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 09 12:32:53 crc kubenswrapper[4703]: E1209 12:32:53.812862 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:32:55 crc kubenswrapper[4703]: I1209 12:32:55.840531 4703 generic.go:334] 
"Generic (PLEG): container finished" podID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerID="d802eddc1bf30be7ca0f224c45b6c4fc51bf6ffb664ee95893f357f53dc9a666" exitCode=0 Dec 09 12:32:55 crc kubenswrapper[4703]: I1209 12:32:55.840730 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerDied","Data":"d802eddc1bf30be7ca0f224c45b6c4fc51bf6ffb664ee95893f357f53dc9a666"} Dec 09 12:32:55 crc kubenswrapper[4703]: I1209 12:32:55.845256 4703 generic.go:334] "Generic (PLEG): container finished" podID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerID="93c8b05923234bf1df08fc613abeb309450e92de8ba8c1f863d264be47f47d01" exitCode=0 Dec 09 12:32:55 crc kubenswrapper[4703]: I1209 12:32:55.845303 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerDied","Data":"93c8b05923234bf1df08fc613abeb309450e92de8ba8c1f863d264be47f47d01"} Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.098802 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.105468 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.192824 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.192903 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.192946 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.193033 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.193106 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.193157 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.193210 
4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlc6p\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.194805 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.194859 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.194971 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.194994 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195035 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195062 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195542 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195588 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195625 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 
09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195656 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195699 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195733 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv4zt\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195753 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf\") pod \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\" (UID: \"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195788 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.195831 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf\") pod \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\" (UID: \"0b084a1a-44b8-439b-ad26-d1ead9d2f225\") " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.197937 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.202557 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.205651 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.207162 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.211449 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.214299 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.221096 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.222642 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.227370 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.230488 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.230523 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info" (OuterVolumeSpecName: "pod-info") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: E1209 12:32:56.230656 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:32:56 crc kubenswrapper[4703]: E1209 12:32:56.230709 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:32:56 crc kubenswrapper[4703]: E1209 12:32:56.230886 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupP
robe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:56 crc kubenswrapper[4703]: E1209 12:32:56.233320 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.257359 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p" (OuterVolumeSpecName: "kube-api-access-qlc6p") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "kube-api-access-qlc6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.272218 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info" (OuterVolumeSpecName: "pod-info") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.274505 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt" (OuterVolumeSpecName: "kube-api-access-dv4zt") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "kube-api-access-dv4zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.280618 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308600 4703 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b084a1a-44b8-439b-ad26-d1ead9d2f225-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308635 4703 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308646 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlc6p\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-kube-api-access-qlc6p\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308657 4703 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308665 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308673 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308683 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308691 4703 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b084a1a-44b8-439b-ad26-d1ead9d2f225-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308702 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308710 4703 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308719 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv4zt\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-kube-api-access-dv4zt\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308726 4703 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.308734 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-plugins\") on node 
\"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.322550 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4" (OuterVolumeSpecName: "persistence") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "pvc-71cad072-5587-4a80-81ac-ea30a725ded4". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.322735 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data" (OuterVolumeSpecName: "config-data") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.323092 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df" (OuterVolumeSpecName: "persistence") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.340235 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data" (OuterVolumeSpecName: "config-data") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.422137 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf" (OuterVolumeSpecName: "server-conf") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.422234 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.422273 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.422309 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") on node \"crc\" " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.422333 4703 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") on node \"crc\" " Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.498405 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf" (OuterVolumeSpecName: "server-conf") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.511916 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.512228 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df") on node "crc" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.512858 4703 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.513044 4703 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-71cad072-5587-4a80-81ac-ea30a725ded4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4") on node "crc" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.527078 4703 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.527138 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.527155 4703 reconciler_common.go:293] "Volume detached for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.527173 4703 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b084a1a-44b8-439b-ad26-d1ead9d2f225-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.555762 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0b084a1a-44b8-439b-ad26-d1ead9d2f225" (UID: "0b084a1a-44b8-439b-ad26-d1ead9d2f225"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.578804 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" (UID: "8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.632367 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.632408 4703 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b084a1a-44b8-439b-ad26-d1ead9d2f225-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.860907 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692","Type":"ContainerDied","Data":"611d8c263d6dabd06ed4013a8a2f3d9226e26da12d0b00fdf8bb13ca0fdd5e49"} Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.860953 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.860994 4703 scope.go:117] "RemoveContainer" containerID="d802eddc1bf30be7ca0f224c45b6c4fc51bf6ffb664ee95893f357f53dc9a666" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.863931 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b084a1a-44b8-439b-ad26-d1ead9d2f225","Type":"ContainerDied","Data":"ee89ae1dacb0777659f275aa529f052f89b881ae878beb7d9bcaa173c0cecabd"} Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.864000 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.884413 4703 scope.go:117] "RemoveContainer" containerID="224e6199fe19659e824e440bbf703885352f49c9d09a917c672c84d49e959a75" Dec 09 12:32:56 crc kubenswrapper[4703]: I1209 12:32:56.975332 4703 scope.go:117] "RemoveContainer" containerID="93c8b05923234bf1df08fc613abeb309450e92de8ba8c1f863d264be47f47d01" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.000103 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.018615 4703 scope.go:117] "RemoveContainer" containerID="7004fbf8033811bf800c6568cb5bc504bf11e5f90793c8b4a4ad5e5198027033" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.067149 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.090029 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" path="/var/lib/kubelet/pods/0b084a1a-44b8-439b-ad26-d1ead9d2f225/volumes" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.091450 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: E1209 12:32:57.095452 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.095493 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: E1209 12:32:57.095559 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.095570 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: E1209 12:32:57.095588 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="setup-container" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.095596 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="setup-container" Dec 09 12:32:57 crc kubenswrapper[4703]: E1209 12:32:57.095619 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="setup-container" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.095628 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="setup-container" Dec 09 12:32:57 crc 
kubenswrapper[4703]: I1209 12:32:57.096039 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b084a1a-44b8-439b-ad26-d1ead9d2f225" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.096071 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" containerName="rabbitmq" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.098323 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103114 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103153 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103170 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103486 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103494 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103607 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.103884 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5pfx6" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.107009 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.132383 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.146725 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.162743 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.165817 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.169938 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.170111 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.170317 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.170483 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.170502 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.170635 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.172224 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-brp2m" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.174660 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264488 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264567 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264769 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264855 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d195f6e9-05a6-430c-b28f-847f7635f1ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264926 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264954 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.264989 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265043 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldvjq\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-kube-api-access-ldvjq\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265258 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265329 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d195f6e9-05a6-430c-b28f-847f7635f1ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265459 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6jk\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-kube-api-access-bn6jk\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265487 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265587 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265722 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/992a545d-2e79-43b3-819b-bd337432ba58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc 
kubenswrapper[4703]: I1209 12:32:57.265773 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265913 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-config-data\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.265990 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/992a545d-2e79-43b3-819b-bd337432ba58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.266050 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.266083 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.266145 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.266171 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.266247 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.368921 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369427 4703 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369470 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369522 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369546 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369604 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369648 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d195f6e9-05a6-430c-b28f-847f7635f1ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369696 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369721 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369783 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369825 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldvjq\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-kube-api-access-ldvjq\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369880 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369912 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d195f6e9-05a6-430c-b28f-847f7635f1ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369951 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6jk\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-kube-api-access-bn6jk\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.369982 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370027 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370033 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370075 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/992a545d-2e79-43b3-819b-bd337432ba58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370108 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370152 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-config-data\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 
09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370220 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/992a545d-2e79-43b3-819b-bd337432ba58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370260 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370289 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.370734 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.371918 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-server-conf\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.372949 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.373071 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.373843 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.373866 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.374112 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/992a545d-2e79-43b3-819b-bd337432ba58-config-data\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.375809 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.375884 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edeb694bd7f36907ab0d9fdf73908fcc67dae265d9e4f5f826263d7e949d0d97/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.377463 4703 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.377499 4703 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8335e5646221fd2e994a320d6d2dc8103f0d7ceb1e7a0b133efe500933756fc8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.378634 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.380037 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d195f6e9-05a6-430c-b28f-847f7635f1ee-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.380403 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.381119 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d195f6e9-05a6-430c-b28f-847f7635f1ee-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.381179 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " 
pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.383476 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.385374 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/992a545d-2e79-43b3-819b-bd337432ba58-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.385433 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.386663 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/992a545d-2e79-43b3-819b-bd337432ba58-pod-info\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.393077 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d195f6e9-05a6-430c-b28f-847f7635f1ee-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.395708 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6jk\" (UniqueName: \"kubernetes.io/projected/992a545d-2e79-43b3-819b-bd337432ba58-kube-api-access-bn6jk\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.425069 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldvjq\" (UniqueName: \"kubernetes.io/projected/d195f6e9-05a6-430c-b28f-847f7635f1ee-kube-api-access-ldvjq\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.466482 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-71cad072-5587-4a80-81ac-ea30a725ded4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-71cad072-5587-4a80-81ac-ea30a725ded4\") pod \"rabbitmq-server-0\" (UID: \"992a545d-2e79-43b3-819b-bd337432ba58\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.467828 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c94eac5-19ef-4f6c-b198-8127baefb4df\") pod \"rabbitmq-cell1-server-0\" (UID: \"d195f6e9-05a6-430c-b28f-847f7635f1ee\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.489036 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:57 crc kubenswrapper[4703]: I1209 12:32:57.724215 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:58 crc kubenswrapper[4703]: I1209 12:32:58.038282 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:58 crc kubenswrapper[4703]: W1209 12:32:58.769573 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod992a545d_2e79_43b3_819b_bd337432ba58.slice/crio-d68979c53bfbfe84eef287cee035aa715194187de59498974eff883702ead866 WatchSource:0}: Error finding container d68979c53bfbfe84eef287cee035aa715194187de59498974eff883702ead866: Status 404 returned error can't find the container with id d68979c53bfbfe84eef287cee035aa715194187de59498974eff883702ead866 Dec 09 12:32:58 crc kubenswrapper[4703]: I1209 12:32:58.777410 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:58 crc kubenswrapper[4703]: I1209 12:32:58.941211 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d195f6e9-05a6-430c-b28f-847f7635f1ee","Type":"ContainerStarted","Data":"c18b33691d26c9fd877097bfa7c92ac1eb01f693d9edb7ea5d855744640dff7e"} Dec 09 12:32:58 crc kubenswrapper[4703]: I1209 12:32:58.943619 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"992a545d-2e79-43b3-819b-bd337432ba58","Type":"ContainerStarted","Data":"d68979c53bfbfe84eef287cee035aa715194187de59498974eff883702ead866"} Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.089179 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692" path="/var/lib/kubelet/pods/8a4d15bf-fdb7-47b8-b5ce-d2aff4b1f692/volumes" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.186723 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.189314 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.205506 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.254283 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.293543 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.293616 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.293658 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.293710 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.293741 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsx8\" (UniqueName: \"kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.294038 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.294680 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397142 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: 
\"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397530 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397572 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397609 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397665 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397697 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsx8\" (UniqueName: \"kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.397759 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.398840 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.398925 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.399021 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " 
pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.399463 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.399621 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.401421 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.422853 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsx8\" (UniqueName: \"kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8\") pod \"dnsmasq-dns-dbb88bf8c-j6lhz\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:32:59 crc kubenswrapper[4703]: I1209 12:32:59.532083 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:33:00 crc kubenswrapper[4703]: I1209 12:33:00.128810 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:33:00 crc kubenswrapper[4703]: I1209 12:33:00.983378 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" event={"ID":"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c","Type":"ContainerStarted","Data":"1f2c35844a8bd1ff059f2e4394fece2c01bc12a75bf42d138a7ebcf23e1f036e"} Dec 09 12:33:00 crc kubenswrapper[4703]: I1209 12:33:00.985982 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d195f6e9-05a6-430c-b28f-847f7635f1ee","Type":"ContainerStarted","Data":"ed2ef6e8c758f50fe6596fa6248d04c5c5cb441bb731a8cc3e8b4676b90172ed"} Dec 09 12:33:01 crc kubenswrapper[4703]: I1209 12:33:01.999431 4703 generic.go:334] "Generic (PLEG): container finished" podID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerID="92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663" exitCode=0 Dec 09 12:33:02 crc kubenswrapper[4703]: I1209 12:33:01.999528 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" event={"ID":"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c","Type":"ContainerDied","Data":"92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663"} Dec 09 12:33:02 crc kubenswrapper[4703]: I1209 12:33:02.004979 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"992a545d-2e79-43b3-819b-bd337432ba58","Type":"ContainerStarted","Data":"c76a0832ad86d599e581c824516c98babc6ba039bcca952884dd41993388b165"} Dec 09 12:33:03 crc kubenswrapper[4703]: I1209 12:33:03.014816 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" event={"ID":"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c","Type":"ContainerStarted","Data":"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22"} Dec 09 12:33:03 crc kubenswrapper[4703]: I1209 12:33:03.044590 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" podStartSLOduration=4.044566574 podStartE2EDuration="4.044566574s" podCreationTimestamp="2025-12-09 12:32:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:03.034675949 +0000 UTC m=+1682.283439478" watchObservedRunningTime="2025-12-09 12:33:03.044566574 +0000 UTC m=+1682.293330093" Dec 09 12:33:03 crc kubenswrapper[4703]: I1209 12:33:03.069745 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:33:03 crc kubenswrapper[4703]: E1209 12:33:03.070022 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:33:04 crc kubenswrapper[4703]: I1209 12:33:04.024836 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:33:06 crc kubenswrapper[4703]: I1209 12:33:06.077994 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 12:33:06 crc kubenswrapper[4703]: E1209 12:33:06.217727 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:06 crc kubenswrapper[4703]: E1209 12:33:06.218264 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:06 crc kubenswrapper[4703]: E1209 12:33:06.218495 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:06 crc kubenswrapper[4703]: E1209 12:33:06.219958 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:33:07 crc kubenswrapper[4703]: E1209 12:33:07.060679 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.533442 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.622735 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.623343 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="dnsmasq-dns" containerID="cri-o://c0aa394d9727d0e4b5133d6aa2c132f0dc9226344579b548e6da63526f98a692" gracePeriod=10 Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.970217 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-xznm5"] Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.974033 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:09 crc kubenswrapper[4703]: I1209 12:33:09.998660 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-xznm5"] Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077540 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077611 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-config\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077643 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077690 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077747 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-svc\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077860 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfr9j\" (UniqueName: \"kubernetes.io/projected/1769eb76-1a06-4ce4-accf-7a0b91a759c8-kube-api-access-vfr9j\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.077913 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.113904 4703 generic.go:334] "Generic (PLEG): container finished" podID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerID="c0aa394d9727d0e4b5133d6aa2c132f0dc9226344579b548e6da63526f98a692" exitCode=0 Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.113983 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerDied","Data":"c0aa394d9727d0e4b5133d6aa2c132f0dc9226344579b548e6da63526f98a692"} Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.179707 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-svc\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.179892 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfr9j\" (UniqueName: \"kubernetes.io/projected/1769eb76-1a06-4ce4-accf-7a0b91a759c8-kube-api-access-vfr9j\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.179937 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.180059 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.180763 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-svc\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " 
pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.181446 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-config\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.181513 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.181555 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.182653 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.182984 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.185741 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-config\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.186078 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.186675 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1769eb76-1a06-4ce4-accf-7a0b91a759c8-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.222848 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfr9j\" (UniqueName: \"kubernetes.io/projected/1769eb76-1a06-4ce4-accf-7a0b91a759c8-kube-api-access-vfr9j\") pod \"dnsmasq-dns-85f64749dc-xznm5\" (UID: \"1769eb76-1a06-4ce4-accf-7a0b91a759c8\") " pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 
12:33:10.309263 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.456960 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.594168 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfqz5\" (UniqueName: \"kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.594793 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.594872 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.595007 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.595087 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.595269 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb\") pod \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\" (UID: \"3920ccfc-84d1-4ed4-9295-942b370b9eaf\") " Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.603785 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5" (OuterVolumeSpecName: "kube-api-access-lfqz5") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "kube-api-access-lfqz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.673419 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.676434 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.694112 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config" (OuterVolumeSpecName: "config") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.698760 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.698818 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfqz5\" (UniqueName: \"kubernetes.io/projected/3920ccfc-84d1-4ed4-9295-942b370b9eaf-kube-api-access-lfqz5\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.698838 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.698849 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.703051 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.703354 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3920ccfc-84d1-4ed4-9295-942b370b9eaf" (UID: "3920ccfc-84d1-4ed4-9295-942b370b9eaf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.801322 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.801370 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3920ccfc-84d1-4ed4-9295-942b370b9eaf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:10 crc kubenswrapper[4703]: I1209 12:33:10.855585 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-xznm5"] Dec 09 12:33:11 crc kubenswrapper[4703]: E1209 12:33:11.082137 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.152357 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" event={"ID":"3920ccfc-84d1-4ed4-9295-942b370b9eaf","Type":"ContainerDied","Data":"90b95f2f783c2b4a120677cb4481700b9479871be4d3df39e4747e975fb6bf07"} Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.152459 4703 scope.go:117] "RemoveContainer" containerID="c0aa394d9727d0e4b5133d6aa2c132f0dc9226344579b548e6da63526f98a692" Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.152458 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-sbnjf" Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.161500 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" event={"ID":"1769eb76-1a06-4ce4-accf-7a0b91a759c8","Type":"ContainerStarted","Data":"53a14b41db1c86fafbbf4f853ab7a21acb7f529aa21868b0796c5e1426f05956"} Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.201000 4703 scope.go:117] "RemoveContainer" containerID="aba61cdfb0aaa77557047ad1f723d313e9187c4a93208abea502994183402071" Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.206643 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:33:11 crc kubenswrapper[4703]: I1209 12:33:11.219361 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-sbnjf"] Dec 09 12:33:12 crc kubenswrapper[4703]: I1209 12:33:12.181054 4703 generic.go:334] "Generic (PLEG): container finished" podID="1769eb76-1a06-4ce4-accf-7a0b91a759c8" containerID="1ddcf702749a2b3c8f9124c90f2f39297a694049ac33af373612778aae452a70" exitCode=0 Dec 09 12:33:12 crc kubenswrapper[4703]: I1209 12:33:12.181268 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" event={"ID":"1769eb76-1a06-4ce4-accf-7a0b91a759c8","Type":"ContainerDied","Data":"1ddcf702749a2b3c8f9124c90f2f39297a694049ac33af373612778aae452a70"} Dec 09 12:33:13 crc kubenswrapper[4703]: I1209 12:33:13.089439 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" path="/var/lib/kubelet/pods/3920ccfc-84d1-4ed4-9295-942b370b9eaf/volumes" Dec 09 12:33:13 crc kubenswrapper[4703]: I1209 12:33:13.197890 4703 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" event={"ID":"1769eb76-1a06-4ce4-accf-7a0b91a759c8","Type":"ContainerStarted","Data":"7292503189dfd91ed436c6de1e8b8d1781949ebd9eae3245848f4ab4a719f0ef"} Dec 09 12:33:13 crc kubenswrapper[4703]: I1209 12:33:13.198170 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:13 crc kubenswrapper[4703]: I1209 12:33:13.221478 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" podStartSLOduration=4.221456063 podStartE2EDuration="4.221456063s" podCreationTimestamp="2025-12-09 12:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:13.219298475 +0000 UTC m=+1692.468062004" watchObservedRunningTime="2025-12-09 12:33:13.221456063 +0000 UTC m=+1692.470219582" Dec 09 12:33:14 crc kubenswrapper[4703]: I1209 12:33:14.070091 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:33:14 crc kubenswrapper[4703]: E1209 12:33:14.070747 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:33:19 crc kubenswrapper[4703]: I1209 12:33:19.425143 4703 scope.go:117] "RemoveContainer" containerID="f01ccc31dd3ac96a8b8d86143102d3fdb78e32cde1edbe82ee282feaebacf713" Dec 09 12:33:19 crc kubenswrapper[4703]: I1209 12:33:19.458219 4703 scope.go:117] "RemoveContainer" containerID="3b3f2b61c96f9048f951cf177867d2aeeafff9cdcf473e38b29623a95aaa3a3f" Dec 09 12:33:20 crc kubenswrapper[4703]: E1209 12:33:20.072341 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:33:20 crc kubenswrapper[4703]: I1209 12:33:20.312234 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-xznm5" Dec 09 12:33:20 crc kubenswrapper[4703]: I1209 12:33:20.415698 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:33:20 crc kubenswrapper[4703]: I1209 12:33:20.419232 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="dnsmasq-dns" containerID="cri-o://e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22" gracePeriod=10 Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.173980 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.296568 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.296688 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.296723 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsx8\" (UniqueName: \"kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.296895 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.297305 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.297343 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.297615 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0\") pod \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\" (UID: \"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c\") " Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.320483 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8" (OuterVolumeSpecName: "kube-api-access-fqsx8") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "kube-api-access-fqsx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.335736 4703 generic.go:334] "Generic (PLEG): container finished" podID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerID="e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22" exitCode=0 Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.335914 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" event={"ID":"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c","Type":"ContainerDied","Data":"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22"} Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.336030 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" event={"ID":"105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c","Type":"ContainerDied","Data":"1f2c35844a8bd1ff059f2e4394fece2c01bc12a75bf42d138a7ebcf23e1f036e"} Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.336137 4703 scope.go:117] "RemoveContainer" containerID="e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.336403 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-j6lhz" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.398038 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.401915 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.402473 4703 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.402502 4703 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.402517 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsx8\" (UniqueName: \"kubernetes.io/projected/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-kube-api-access-fqsx8\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.409392 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.415994 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.421836 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.424576 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config" (OuterVolumeSpecName: "config") pod "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" (UID: "105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.505490 4703 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.505531 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.505546 4703 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.505556 4703 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.524721 4703 scope.go:117] "RemoveContainer" containerID="92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.599415 4703 scope.go:117] "RemoveContainer" containerID="e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22" Dec 09 12:33:21 crc kubenswrapper[4703]: E1209 12:33:21.600476 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22\": container with ID starting with e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22 not found: ID does not exist" containerID="e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.600541 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22"} err="failed to get container status 
\"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22\": rpc error: code = NotFound desc = could not find container \"e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22\": container with ID starting with e609e9a490f1bdeebe583af978318cd02cb5b98f634b68a06510a0881f92bf22 not found: ID does not exist" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.600594 4703 scope.go:117] "RemoveContainer" containerID="92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663" Dec 09 12:33:21 crc kubenswrapper[4703]: E1209 12:33:21.600962 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663\": container with ID starting with 92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663 not found: ID does not exist" containerID="92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.601021 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663"} err="failed to get container status \"92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663\": rpc error: code = NotFound desc = could not find container \"92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663\": container with ID starting with 92878cd1830e00f9d866ce58a6340f0e961c002dde33867fbd6d00bb5b0c4663 not found: ID does not exist" Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.679985 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:33:21 crc kubenswrapper[4703]: I1209 12:33:21.692316 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-j6lhz"] Dec 09 12:33:23 crc kubenswrapper[4703]: I1209 12:33:23.084910 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" path="/var/lib/kubelet/pods/105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c/volumes" Dec 09 12:33:25 crc kubenswrapper[4703]: E1209 12:33:25.197519 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:33:25 crc kubenswrapper[4703]: E1209 12:33:25.198180 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:33:25 crc kubenswrapper[4703]: E1209 12:33:25.198977 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:25 crc kubenswrapper[4703]: E1209 12:33:25.200851 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:33:26 crc kubenswrapper[4703]: I1209 12:33:26.070088 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:33:26 crc kubenswrapper[4703]: E1209 12:33:26.070550 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.169995 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n"] Dec 09 12:33:33 crc kubenswrapper[4703]: E1209 12:33:33.187290 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.187345 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: E1209 12:33:33.187405 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="init" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.187415 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="init" Dec 09 12:33:33 crc kubenswrapper[4703]: E1209 12:33:33.187441 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.187450 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: E1209 12:33:33.187508 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="init" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.187514 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="init" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.188123 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3920ccfc-84d1-4ed4-9295-942b370b9eaf" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.188172 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="105d2a1c-4219-4cee-80ae-cf3fc6ee0e0c" containerName="dnsmasq-dns" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.190169 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.194883 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.195256 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.195257 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.195268 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.201104 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n"] Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.326850 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25j9\" (UniqueName: \"kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.327406 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.327707 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.327940 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.430173 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.430380 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.431984 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.432089 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25j9\" (UniqueName: \"kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.438908 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.439057 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.441852 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.450504 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25j9\" (UniqueName: \"kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.486633 4703 generic.go:334] "Generic (PLEG): container finished" podID="992a545d-2e79-43b3-819b-bd337432ba58" containerID="c76a0832ad86d599e581c824516c98babc6ba039bcca952884dd41993388b165" exitCode=0 Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.486730 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"992a545d-2e79-43b3-819b-bd337432ba58","Type":"ContainerDied","Data":"c76a0832ad86d599e581c824516c98babc6ba039bcca952884dd41993388b165"} Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.492508 4703 generic.go:334] "Generic (PLEG): container finished" podID="d195f6e9-05a6-430c-b28f-847f7635f1ee" 
containerID="ed2ef6e8c758f50fe6596fa6248d04c5c5cb441bb731a8cc3e8b4676b90172ed" exitCode=0 Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.492552 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d195f6e9-05a6-430c-b28f-847f7635f1ee","Type":"ContainerDied","Data":"ed2ef6e8c758f50fe6596fa6248d04c5c5cb441bb731a8cc3e8b4676b90172ed"} Dec 09 12:33:33 crc kubenswrapper[4703]: I1209 12:33:33.519407 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:34 crc kubenswrapper[4703]: E1209 12:33:34.210226 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:34 crc kubenswrapper[4703]: E1209 12:33:34.211230 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:34 crc kubenswrapper[4703]: E1209 12:33:34.211446 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:34 crc kubenswrapper[4703]: E1209 12:33:34.213385 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:33:34 crc kubenswrapper[4703]: W1209 12:33:34.247729 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb62f9f99_a686_48b2_90b3_5ccdcf42a687.slice/crio-8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c WatchSource:0}: Error finding container 8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c: Status 404 returned error can't find the container with id 8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.248601 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n"] Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.504290 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" event={"ID":"b62f9f99-a686-48b2-90b3-5ccdcf42a687","Type":"ContainerStarted","Data":"8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c"} Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.507742 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d195f6e9-05a6-430c-b28f-847f7635f1ee","Type":"ContainerStarted","Data":"411eec4b321cf707f883b0ad256d3a0392db1b0c53d0c1000af0207b87673f94"} Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.508131 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.510930 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"992a545d-2e79-43b3-819b-bd337432ba58","Type":"ContainerStarted","Data":"5ba0f64d90c62dc20984db5b898ab1384ee020e28c480461bf9318b02eaf6b45"} Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.511587 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.549400 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.549377219 podStartE2EDuration="38.549377219s" podCreationTimestamp="2025-12-09 12:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:34.543125052 +0000 UTC m=+1713.791888571" watchObservedRunningTime="2025-12-09 12:33:34.549377219 +0000 UTC m=+1713.798140728" Dec 09 12:33:34 crc kubenswrapper[4703]: I1209 12:33:34.581167 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.58114088 podStartE2EDuration="38.58114088s" podCreationTimestamp="2025-12-09 12:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:34.578913831 +0000 UTC m=+1713.827677360" watchObservedRunningTime="2025-12-09 12:33:34.58114088 +0000 UTC m=+1713.829904409" Dec 09 12:33:36 crc kubenswrapper[4703]: E1209 12:33:36.074103 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:33:40 crc kubenswrapper[4703]: I1209 12:33:40.071398 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:33:40 crc kubenswrapper[4703]: E1209 12:33:40.072724 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:33:44 crc kubenswrapper[4703]: I1209 12:33:44.681367 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" event={"ID":"b62f9f99-a686-48b2-90b3-5ccdcf42a687","Type":"ContainerStarted","Data":"362087cc206586f3016c4ac89eb3f76ab9919f2740e8217b5664b53f763e3324"} Dec 09 12:33:44 crc kubenswrapper[4703]: I1209 12:33:44.710964 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" podStartSLOduration=2.248046372 podStartE2EDuration="11.710930837s" podCreationTimestamp="2025-12-09 12:33:33 +0000 UTC" firstStartedPulling="2025-12-09 12:33:34.251570643 +0000 UTC m=+1713.500334162" lastFinishedPulling="2025-12-09 12:33:43.714455108 +0000 UTC m=+1722.963218627" observedRunningTime="2025-12-09 12:33:44.705376619 +0000 UTC m=+1723.954140138" watchObservedRunningTime="2025-12-09 12:33:44.710930837 +0000 UTC m=+1723.959694356" Dec 09 12:33:46 crc 
Dec 09 12:33:46 crc kubenswrapper[4703]: E1209 12:33:46.071949 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:33:47 crc kubenswrapper[4703]: I1209 12:33:47.494414 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 09 12:33:47 crc kubenswrapper[4703]: I1209 12:33:47.727470 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 09 12:33:51 crc kubenswrapper[4703]: E1209 12:33:51.080905 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:33:54 crc kubenswrapper[4703]: I1209 12:33:54.070399 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46"
Dec 09 12:33:54 crc kubenswrapper[4703]: E1209 12:33:54.070841 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:33:56 crc kubenswrapper[4703]: I1209 12:33:56.815539 4703 generic.go:334] "Generic (PLEG): container finished" podID="b62f9f99-a686-48b2-90b3-5ccdcf42a687" containerID="362087cc206586f3016c4ac89eb3f76ab9919f2740e8217b5664b53f763e3324" exitCode=0
Dec 09 12:33:56 crc kubenswrapper[4703]: I1209 12:33:56.815664 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" event={"ID":"b62f9f99-a686-48b2-90b3-5ccdcf42a687","Type":"ContainerDied","Data":"362087cc206586f3016c4ac89eb3f76ab9919f2740e8217b5664b53f763e3324"}
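The machine-config-daemon pair above ("RemoveContainer" followed by the CrashLoopBackOff rejection) has been recurring since 12:33:14: each sync attempt is refused while the restart back-off is in force, and the delay doubles per consecutive crash until it reaches the 5m0s cap quoted in the error. A sketch of that capped doubling, assuming the upstream kubelet defaults of a 10s base and a 5m ceiling (illustrative, not the kubelet's backoff code):

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the restart delay after the given number of
    // consecutive failures: 10s base, doubling each time, clamped at 5m.
    func crashLoopDelay(failures int) time.Duration {
        const (
            base = 10 * time.Second
            max  = 5 * time.Minute
        )
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 7; f++ {
            fmt.Printf("failure %d -> wait %v\n", f, crashLoopDelay(f))
        }
        // From the sixth consecutive failure on, the delay is clamped
        // at 5m0s, the "back-off 5m0s" figure in the log entries.
    }

While the back-off is active, only the periodic "Error syncing pod, skipping" lines appear; the container is not actually restarted until the delay expires.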
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.563786 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n"
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.745584 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle\") pod \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") "
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.745668 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory\") pod \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") "
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.745888 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25j9\" (UniqueName: \"kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9\") pod \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") "
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.745928 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key\") pod \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\" (UID: \"b62f9f99-a686-48b2-90b3-5ccdcf42a687\") "
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.753911 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b62f9f99-a686-48b2-90b3-5ccdcf42a687" (UID: "b62f9f99-a686-48b2-90b3-5ccdcf42a687"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.755559 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9" (OuterVolumeSpecName: "kube-api-access-k25j9") pod "b62f9f99-a686-48b2-90b3-5ccdcf42a687" (UID: "b62f9f99-a686-48b2-90b3-5ccdcf42a687"). InnerVolumeSpecName "kube-api-access-k25j9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.780534 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b62f9f99-a686-48b2-90b3-5ccdcf42a687" (UID: "b62f9f99-a686-48b2-90b3-5ccdcf42a687"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.784486 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory" (OuterVolumeSpecName: "inventory") pod "b62f9f99-a686-48b2-90b3-5ccdcf42a687" (UID: "b62f9f99-a686-48b2-90b3-5ccdcf42a687"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.849327 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25j9\" (UniqueName: \"kubernetes.io/projected/b62f9f99-a686-48b2-90b3-5ccdcf42a687-kube-api-access-k25j9\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.849772 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.849784 4703 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.849796 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b62f9f99-a686-48b2-90b3-5ccdcf42a687-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.851784 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" event={"ID":"b62f9f99-a686-48b2-90b3-5ccdcf42a687","Type":"ContainerDied","Data":"8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c"} Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.851838 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb1d41d9049ca20d38af3894653c12ced64bd48c5979c945d795675634b3f0c" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.851907 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.936175 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m"] Dec 09 12:33:58 crc kubenswrapper[4703]: E1209 12:33:58.936999 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b62f9f99-a686-48b2-90b3-5ccdcf42a687" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.937033 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62f9f99-a686-48b2-90b3-5ccdcf42a687" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.937343 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62f9f99-a686-48b2-90b3-5ccdcf42a687" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.938606 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.941902 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.942988 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.943258 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.943463 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:33:58 crc kubenswrapper[4703]: I1209 12:33:58.975348 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m"] Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.054022 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.054224 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ch54\" (UniqueName: \"kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.054294 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.157347 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.157493 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ch54\" (UniqueName: \"kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.157545 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.162830 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.166999 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.176628 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ch54\" (UniqueName: \"kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fr24m\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.271836 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.813136 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m"] Dec 09 12:33:59 crc kubenswrapper[4703]: I1209 12:33:59.867791 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" event={"ID":"ea7666b8-7519-49e0-b20d-5aa60df946a4","Type":"ContainerStarted","Data":"3a9643cd10f86c5d6da1968a46e2e0e4a748ad42177d35381ff4d35a7b4c5ab3"} Dec 09 12:34:00 crc kubenswrapper[4703]: I1209 12:34:00.881156 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" event={"ID":"ea7666b8-7519-49e0-b20d-5aa60df946a4","Type":"ContainerStarted","Data":"f9dc608a37ff211f5ad6be1a6de3c1a3504e06d903b766f50c65f2d639e6b541"} Dec 09 12:34:00 crc kubenswrapper[4703]: I1209 12:34:00.903815 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" podStartSLOduration=2.481484102 podStartE2EDuration="2.903791583s" podCreationTimestamp="2025-12-09 12:33:58 +0000 UTC" firstStartedPulling="2025-12-09 12:33:59.820441357 +0000 UTC m=+1739.069204876" lastFinishedPulling="2025-12-09 12:34:00.242748838 +0000 UTC m=+1739.491512357" observedRunningTime="2025-12-09 12:34:00.901671786 +0000 UTC m=+1740.150435315" watchObservedRunningTime="2025-12-09 12:34:00.903791583 +0000 UTC m=+1740.152555102" Dec 09 12:34:01 crc kubenswrapper[4703]: E1209 12:34:01.078575 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:34:02 crc kubenswrapper[4703]: E1209 12:34:02.073243 4703 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:34:03 crc kubenswrapper[4703]: I1209 12:34:03.924956 4703 generic.go:334] "Generic (PLEG): container finished" podID="ea7666b8-7519-49e0-b20d-5aa60df946a4" containerID="f9dc608a37ff211f5ad6be1a6de3c1a3504e06d903b766f50c65f2d639e6b541" exitCode=0 Dec 09 12:34:03 crc kubenswrapper[4703]: I1209 12:34:03.925027 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" event={"ID":"ea7666b8-7519-49e0-b20d-5aa60df946a4","Type":"ContainerDied","Data":"f9dc608a37ff211f5ad6be1a6de3c1a3504e06d903b766f50c65f2d639e6b541"} Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.491287 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.625342 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory\") pod \"ea7666b8-7519-49e0-b20d-5aa60df946a4\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.625485 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key\") pod \"ea7666b8-7519-49e0-b20d-5aa60df946a4\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.625530 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ch54\" (UniqueName: \"kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54\") pod \"ea7666b8-7519-49e0-b20d-5aa60df946a4\" (UID: \"ea7666b8-7519-49e0-b20d-5aa60df946a4\") " Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.633272 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54" (OuterVolumeSpecName: "kube-api-access-5ch54") pod "ea7666b8-7519-49e0-b20d-5aa60df946a4" (UID: "ea7666b8-7519-49e0-b20d-5aa60df946a4"). InnerVolumeSpecName "kube-api-access-5ch54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.662974 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ea7666b8-7519-49e0-b20d-5aa60df946a4" (UID: "ea7666b8-7519-49e0-b20d-5aa60df946a4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.663370 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory" (OuterVolumeSpecName: "inventory") pod "ea7666b8-7519-49e0-b20d-5aa60df946a4" (UID: "ea7666b8-7519-49e0-b20d-5aa60df946a4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.729144 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.729208 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea7666b8-7519-49e0-b20d-5aa60df946a4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.729219 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ch54\" (UniqueName: \"kubernetes.io/projected/ea7666b8-7519-49e0-b20d-5aa60df946a4-kube-api-access-5ch54\") on node \"crc\" DevicePath \"\"" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.949085 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" event={"ID":"ea7666b8-7519-49e0-b20d-5aa60df946a4","Type":"ContainerDied","Data":"3a9643cd10f86c5d6da1968a46e2e0e4a748ad42177d35381ff4d35a7b4c5ab3"} Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.949133 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9643cd10f86c5d6da1968a46e2e0e4a748ad42177d35381ff4d35a7b4c5ab3" Dec 09 12:34:05 crc kubenswrapper[4703]: I1209 12:34:05.949136 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fr24m" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.029687 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp"] Dec 09 12:34:06 crc kubenswrapper[4703]: E1209 12:34:06.030450 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7666b8-7519-49e0-b20d-5aa60df946a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.030476 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7666b8-7519-49e0-b20d-5aa60df946a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.030763 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7666b8-7519-49e0-b20d-5aa60df946a4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.031963 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.034882 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.035163 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.036309 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.036417 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdvkd\" (UniqueName: \"kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.036552 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.036706 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.036876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.041250 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.042788 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp"] Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.139619 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.139712 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdvkd\" (UniqueName: \"kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.139758 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.139857 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.144166 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.145712 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.148232 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.162298 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdvkd\" (UniqueName: \"kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.357895 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.944939 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp"] Dec 09 12:34:06 crc kubenswrapper[4703]: W1209 12:34:06.945926 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc01bdc0a_4376_4b9d_8418_40e064327bfe.slice/crio-b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f WatchSource:0}: Error finding container b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f: Status 404 returned error can't find the container with id b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f Dec 09 12:34:06 crc kubenswrapper[4703]: I1209 12:34:06.974567 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" event={"ID":"c01bdc0a-4376-4b9d-8418-40e064327bfe","Type":"ContainerStarted","Data":"b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f"} Dec 09 12:34:07 crc kubenswrapper[4703]: I1209 12:34:07.988779 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" event={"ID":"c01bdc0a-4376-4b9d-8418-40e064327bfe","Type":"ContainerStarted","Data":"ec3678b6361519d99854188f39af807ab6879f68e698f2af0a65b1ef1ee4137d"} Dec 09 12:34:08 crc kubenswrapper[4703]: I1209 12:34:08.023620 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" podStartSLOduration=1.595929999 podStartE2EDuration="2.023594912s" podCreationTimestamp="2025-12-09 12:34:06 +0000 UTC" firstStartedPulling="2025-12-09 12:34:06.952202497 +0000 UTC m=+1746.200966016" lastFinishedPulling="2025-12-09 12:34:07.37986741 +0000 UTC m=+1746.628630929" observedRunningTime="2025-12-09 12:34:08.011632132 +0000 UTC m=+1747.260395661" watchObservedRunningTime="2025-12-09 12:34:08.023594912 +0000 UTC m=+1747.272358431" Dec 09 12:34:08 crc kubenswrapper[4703]: I1209 12:34:08.071152 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:34:08 crc kubenswrapper[4703]: E1209 12:34:08.071519 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.196549 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.197517 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.197708 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.199084 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.205924 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.206026 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.206246 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:34:16 crc kubenswrapper[4703]: E1209 12:34:16.210349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:34:19 crc kubenswrapper[4703]: I1209 12:34:19.674897 4703 scope.go:117] "RemoveContainer" containerID="93950ba68f9aeddcee195a8836e85ce62543205c5a82a660738b7caa358c020f" Dec 09 12:34:19 crc kubenswrapper[4703]: I1209 12:34:19.707517 4703 scope.go:117] "RemoveContainer" containerID="53488f4d64c4a6e7e47e02f044e62c2880ce5a5c8f207c4e49d7f71fd0f1ba5b" Dec 09 12:34:23 crc kubenswrapper[4703]: I1209 12:34:23.070707 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:34:23 crc kubenswrapper[4703]: E1209 12:34:23.071517 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:34:27 crc kubenswrapper[4703]: E1209 12:34:27.072465 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:34:30 crc kubenswrapper[4703]: E1209 12:34:30.071917 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:34:34 crc kubenswrapper[4703]: I1209 12:34:34.072055 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:34:34 crc kubenswrapper[4703]: E1209 12:34:34.073094 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:34:39 crc kubenswrapper[4703]: E1209 12:34:39.072070 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:34:43 crc kubenswrapper[4703]: E1209 12:34:43.074128 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:34:45 crc kubenswrapper[4703]: I1209 12:34:45.070817 4703 scope.go:117] 
"RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:34:45 crc kubenswrapper[4703]: E1209 12:34:45.071561 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:34:54 crc kubenswrapper[4703]: E1209 12:34:54.073271 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:34:55 crc kubenswrapper[4703]: E1209 12:34:55.071889 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:34:59 crc kubenswrapper[4703]: I1209 12:34:59.070797 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:34:59 crc kubenswrapper[4703]: E1209 12:34:59.071909 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:35:07 crc kubenswrapper[4703]: E1209 12:35:07.073153 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:35:09 crc kubenswrapper[4703]: E1209 12:35:09.074678 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:35:12 crc kubenswrapper[4703]: I1209 12:35:12.070784 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:35:12 crc kubenswrapper[4703]: E1209 12:35:12.071477 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" 
podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:35:19 crc kubenswrapper[4703]: E1209 12:35:19.072857 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:35:19 crc kubenswrapper[4703]: I1209 12:35:19.814695 4703 scope.go:117] "RemoveContainer" containerID="943e64a17b3892f1f99a72714c0d139e740d57019c12afba0c75b8320b4461b5" Dec 09 12:35:19 crc kubenswrapper[4703]: I1209 12:35:19.859750 4703 scope.go:117] "RemoveContainer" containerID="5b6034afbb654131d7baac5e979bce18651dbfad9dd382dde333286858336607" Dec 09 12:35:24 crc kubenswrapper[4703]: E1209 12:35:24.071703 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:35:27 crc kubenswrapper[4703]: I1209 12:35:27.069457 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:35:27 crc kubenswrapper[4703]: E1209 12:35:27.070307 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:35:34 crc kubenswrapper[4703]: E1209 12:35:34.072034 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.773666 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.777168 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.787840 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.813121 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.813268 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mh5\" (UniqueName: \"kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.813407 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.915815 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.915923 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.915990 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89mh5\" (UniqueName: \"kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.916934 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.917083 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.941257 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-89mh5\" (UniqueName: \"kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5\") pod \"certified-operators-mg94v\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.968077 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.970776 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:37 crc kubenswrapper[4703]: I1209 12:35:37.987800 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.018690 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p69k\" (UniqueName: \"kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.018822 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.018933 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.073088 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.102856 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.120642 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.120898 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p69k\" (UniqueName: \"kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.120962 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.121646 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.122207 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.147121 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p69k\" (UniqueName: \"kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k\") pod \"community-operators-shx4n\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: E1209 12:35:38.238124 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:35:38 crc kubenswrapper[4703]: E1209 12:35:38.238391 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:35:38 crc kubenswrapper[4703]: E1209 12:35:38.238795 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:35:38 crc kubenswrapper[4703]: E1209 12:35:38.240152 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.339495 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:38 crc kubenswrapper[4703]: I1209 12:35:38.754936 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:35:39 crc kubenswrapper[4703]: I1209 12:35:39.058470 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3"} Dec 09 12:35:39 crc kubenswrapper[4703]: I1209 12:35:39.064053 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerStarted","Data":"7977131282b3c7022cbea66a30e823fad6027ca594853e1106aa2fcdad6a1318"} Dec 09 12:35:39 crc kubenswrapper[4703]: I1209 12:35:39.142281 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:35:40 crc kubenswrapper[4703]: I1209 12:35:40.080608 4703 generic.go:334] "Generic (PLEG): container finished" podID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerID="6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e" exitCode=0 Dec 09 12:35:40 crc kubenswrapper[4703]: I1209 12:35:40.081056 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerDied","Data":"6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e"} Dec 09 12:35:40 crc kubenswrapper[4703]: I1209 12:35:40.085484 4703 generic.go:334] "Generic (PLEG): container finished" podID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerID="95ef917bb209c325c61c36b575a71d8352e85f75abd3cd5d1070fd5f2513ca15" exitCode=0 Dec 09 12:35:40 crc kubenswrapper[4703]: I1209 12:35:40.085566 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerDied","Data":"95ef917bb209c325c61c36b575a71d8352e85f75abd3cd5d1070fd5f2513ca15"} Dec 09 12:35:40 crc kubenswrapper[4703]: I1209 12:35:40.085684 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerStarted","Data":"e6a012f134f979e5eba36435841eb26205721847932259a083c6f2d74561b097"} Dec 09 12:35:42 crc kubenswrapper[4703]: I1209 12:35:42.111146 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerStarted","Data":"0b973daf592fa243ee49467e2f49b2be5e05359e4f1dc0490345ba17a50202ef"} Dec 09 12:35:42 crc kubenswrapper[4703]: I1209 12:35:42.126992 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerStarted","Data":"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2"} Dec 09 12:35:45 crc kubenswrapper[4703]: E1209 12:35:45.202121 4703 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:35:45 crc kubenswrapper[4703]: E1209 12:35:45.202821 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:35:45 crc kubenswrapper[4703]: E1209 12:35:45.203431 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: 
initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:35:45 crc kubenswrapper[4703]: E1209 12:35:45.205902 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:35:46 crc kubenswrapper[4703]: I1209 12:35:46.182599 4703 generic.go:334] "Generic (PLEG): container finished" podID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerID="0b973daf592fa243ee49467e2f49b2be5e05359e4f1dc0490345ba17a50202ef" exitCode=0 Dec 09 12:35:46 crc kubenswrapper[4703]: I1209 12:35:46.182645 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerDied","Data":"0b973daf592fa243ee49467e2f49b2be5e05359e4f1dc0490345ba17a50202ef"} Dec 09 12:35:46 crc kubenswrapper[4703]: I1209 12:35:46.186408 4703 generic.go:334] "Generic (PLEG): container finished" podID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerID="259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2" exitCode=0 Dec 09 12:35:46 crc kubenswrapper[4703]: I1209 12:35:46.186468 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerDied","Data":"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2"} Dec 09 12:35:47 crc kubenswrapper[4703]: I1209 12:35:47.201315 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerStarted","Data":"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481"} Dec 09 12:35:47 crc kubenswrapper[4703]: I1209 12:35:47.221074 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerStarted","Data":"3dfabdfaa0e24b261c2867cc8ecd930abc25fd244478fd5c718638c6ef19842f"} Dec 09 12:35:47 crc kubenswrapper[4703]: I1209 12:35:47.241119 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mg94v" podStartSLOduration=3.736288466 podStartE2EDuration="10.241068555s" podCreationTimestamp="2025-12-09 12:35:37 +0000 UTC" firstStartedPulling="2025-12-09 12:35:40.083103153 +0000 UTC m=+1839.331866672" lastFinishedPulling="2025-12-09 12:35:46.587883242 +0000 UTC m=+1845.836646761" observedRunningTime="2025-12-09 12:35:47.22738749 +0000 UTC m=+1846.476151009" watchObservedRunningTime="2025-12-09 12:35:47.241068555 +0000 UTC m=+1846.489832074" Dec 09 12:35:47 crc kubenswrapper[4703]: I1209 12:35:47.268357 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-shx4n" podStartSLOduration=3.775557098 podStartE2EDuration="10.268328716s" podCreationTimestamp="2025-12-09 12:35:37 +0000 UTC" firstStartedPulling="2025-12-09 12:35:40.087749737 +0000 UTC m=+1839.336513256" lastFinishedPulling="2025-12-09 12:35:46.580521355 +0000 UTC m=+1845.829284874" observedRunningTime="2025-12-09 12:35:47.262784128 +0000 UTC m=+1846.511547657" watchObservedRunningTime="2025-12-09 12:35:47.268328716 +0000 UTC m=+1846.517092235" Dec 09 12:35:48 crc kubenswrapper[4703]: I1209 12:35:48.104607 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:48 crc kubenswrapper[4703]: I1209 12:35:48.104866 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:48 crc kubenswrapper[4703]: I1209 12:35:48.341310 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:48 crc kubenswrapper[4703]: I1209 12:35:48.341384 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:49 crc kubenswrapper[4703]: I1209 12:35:49.209528 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mg94v" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="registry-server" probeResult="failure" output=< Dec 09 12:35:49 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:35:49 crc kubenswrapper[4703]: > Dec 09 12:35:49 crc kubenswrapper[4703]: I1209 12:35:49.396785 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-shx4n" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="registry-server" probeResult="failure" output=< Dec 09 12:35:49 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:35:49 crc kubenswrapper[4703]: > Dec 09 12:35:53 crc kubenswrapper[4703]: E1209 12:35:53.073024 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:35:58 crc kubenswrapper[4703]: I1209 12:35:58.158412 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:58 crc kubenswrapper[4703]: I1209 12:35:58.223345 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:35:58 crc kubenswrapper[4703]: I1209 12:35:58.404295 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:58 crc kubenswrapper[4703]: I1209 12:35:58.405629 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:35:58 crc kubenswrapper[4703]: I1209 12:35:58.479689 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:35:59 crc kubenswrapper[4703]: I1209 12:35:59.356853 4703 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-mg94v" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="registry-server" containerID="cri-o://1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481" gracePeriod=2 Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.013664 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:36:00 crc kubenswrapper[4703]: E1209 12:36:00.072555 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.193915 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities\") pod \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.194657 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89mh5\" (UniqueName: \"kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5\") pod \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.194724 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content\") pod \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\" (UID: \"66d40a6a-f4cd-43fd-9526-e1cbf16f9286\") " Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.194864 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities" (OuterVolumeSpecName: "utilities") pod "66d40a6a-f4cd-43fd-9526-e1cbf16f9286" (UID: "66d40a6a-f4cd-43fd-9526-e1cbf16f9286"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.196480 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.203631 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5" (OuterVolumeSpecName: "kube-api-access-89mh5") pod "66d40a6a-f4cd-43fd-9526-e1cbf16f9286" (UID: "66d40a6a-f4cd-43fd-9526-e1cbf16f9286"). InnerVolumeSpecName "kube-api-access-89mh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.260273 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66d40a6a-f4cd-43fd-9526-e1cbf16f9286" (UID: "66d40a6a-f4cd-43fd-9526-e1cbf16f9286"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.299431 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89mh5\" (UniqueName: \"kubernetes.io/projected/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-kube-api-access-89mh5\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.299489 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66d40a6a-f4cd-43fd-9526-e1cbf16f9286-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.374533 4703 generic.go:334] "Generic (PLEG): container finished" podID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerID="1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481" exitCode=0 Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.374621 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerDied","Data":"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481"} Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.374662 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mg94v" event={"ID":"66d40a6a-f4cd-43fd-9526-e1cbf16f9286","Type":"ContainerDied","Data":"7977131282b3c7022cbea66a30e823fad6027ca594853e1106aa2fcdad6a1318"} Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.374675 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mg94v" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.374687 4703 scope.go:117] "RemoveContainer" containerID="1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.410016 4703 scope.go:117] "RemoveContainer" containerID="259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.429020 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.448765 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mg94v"] Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.466375 4703 scope.go:117] "RemoveContainer" containerID="6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.510623 4703 scope.go:117] "RemoveContainer" containerID="1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481" Dec 09 12:36:00 crc kubenswrapper[4703]: E1209 12:36:00.511151 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481\": container with ID starting with 1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481 not found: ID does not exist" containerID="1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.511222 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481"} err="failed to get container status 
\"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481\": rpc error: code = NotFound desc = could not find container \"1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481\": container with ID starting with 1e2ce57de3555a34a38a5ca5420662c764209bdf79145e05b27cfe5cbd3d0481 not found: ID does not exist" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.511258 4703 scope.go:117] "RemoveContainer" containerID="259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2" Dec 09 12:36:00 crc kubenswrapper[4703]: E1209 12:36:00.511633 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2\": container with ID starting with 259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2 not found: ID does not exist" containerID="259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.511699 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2"} err="failed to get container status \"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2\": rpc error: code = NotFound desc = could not find container \"259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2\": container with ID starting with 259eee03662c853423c597975fff3b000b2ee07002b59115e0661e7447c670f2 not found: ID does not exist" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.511737 4703 scope.go:117] "RemoveContainer" containerID="6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e" Dec 09 12:36:00 crc kubenswrapper[4703]: E1209 12:36:00.512155 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e\": container with ID starting with 6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e not found: ID does not exist" containerID="6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.512347 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e"} err="failed to get container status \"6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e\": rpc error: code = NotFound desc = could not find container \"6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e\": container with ID starting with 6eaf84407c86e77976eb7fce0b01cc8fd852afe9330ce9dc05ff669ac219617e not found: ID does not exist" Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.815945 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:36:00 crc kubenswrapper[4703]: I1209 12:36:00.816282 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shx4n" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="registry-server" containerID="cri-o://3dfabdfaa0e24b261c2867cc8ecd930abc25fd244478fd5c718638c6ef19842f" gracePeriod=2 Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.085510 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" 
path="/var/lib/kubelet/pods/66d40a6a-f4cd-43fd-9526-e1cbf16f9286/volumes" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.399320 4703 generic.go:334] "Generic (PLEG): container finished" podID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerID="3dfabdfaa0e24b261c2867cc8ecd930abc25fd244478fd5c718638c6ef19842f" exitCode=0 Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.399419 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerDied","Data":"3dfabdfaa0e24b261c2867cc8ecd930abc25fd244478fd5c718638c6ef19842f"} Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.524271 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.654599 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content\") pod \"3840d6f3-9258-41d0-8bc1-10a546e47e90\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.654979 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities\") pod \"3840d6f3-9258-41d0-8bc1-10a546e47e90\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.655031 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p69k\" (UniqueName: \"kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k\") pod \"3840d6f3-9258-41d0-8bc1-10a546e47e90\" (UID: \"3840d6f3-9258-41d0-8bc1-10a546e47e90\") " Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.657120 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities" (OuterVolumeSpecName: "utilities") pod "3840d6f3-9258-41d0-8bc1-10a546e47e90" (UID: "3840d6f3-9258-41d0-8bc1-10a546e47e90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.663554 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k" (OuterVolumeSpecName: "kube-api-access-6p69k") pod "3840d6f3-9258-41d0-8bc1-10a546e47e90" (UID: "3840d6f3-9258-41d0-8bc1-10a546e47e90"). InnerVolumeSpecName "kube-api-access-6p69k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.716925 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3840d6f3-9258-41d0-8bc1-10a546e47e90" (UID: "3840d6f3-9258-41d0-8bc1-10a546e47e90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.760832 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.760890 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3840d6f3-9258-41d0-8bc1-10a546e47e90-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:01 crc kubenswrapper[4703]: I1209 12:36:01.760903 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p69k\" (UniqueName: \"kubernetes.io/projected/3840d6f3-9258-41d0-8bc1-10a546e47e90-kube-api-access-6p69k\") on node \"crc\" DevicePath \"\"" Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.416315 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shx4n" event={"ID":"3840d6f3-9258-41d0-8bc1-10a546e47e90","Type":"ContainerDied","Data":"e6a012f134f979e5eba36435841eb26205721847932259a083c6f2d74561b097"} Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.416389 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shx4n" Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.416656 4703 scope.go:117] "RemoveContainer" containerID="3dfabdfaa0e24b261c2867cc8ecd930abc25fd244478fd5c718638c6ef19842f" Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.442745 4703 scope.go:117] "RemoveContainer" containerID="0b973daf592fa243ee49467e2f49b2be5e05359e4f1dc0490345ba17a50202ef" Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.464258 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.480042 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shx4n"] Dec 09 12:36:02 crc kubenswrapper[4703]: I1209 12:36:02.483241 4703 scope.go:117] "RemoveContainer" containerID="95ef917bb209c325c61c36b575a71d8352e85f75abd3cd5d1070fd5f2513ca15" Dec 09 12:36:03 crc kubenswrapper[4703]: I1209 12:36:03.085834 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" path="/var/lib/kubelet/pods/3840d6f3-9258-41d0-8bc1-10a546e47e90/volumes" Dec 09 12:36:04 crc kubenswrapper[4703]: E1209 12:36:04.073022 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:36:15 crc kubenswrapper[4703]: E1209 12:36:15.073693 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:36:18 crc kubenswrapper[4703]: E1209 12:36:18.072965 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:36:28 crc kubenswrapper[4703]: E1209 12:36:28.073861 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:36:29 crc kubenswrapper[4703]: E1209 12:36:29.071914 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:36:39 crc kubenswrapper[4703]: E1209 12:36:39.073548 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:36:43 crc kubenswrapper[4703]: E1209 12:36:43.073387 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.067230 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-63d9-account-create-update-s4v4m"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.090311 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-44txf"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.103939 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b17c-account-create-update-vgfln"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.117224 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-63d9-account-create-update-s4v4m"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.132019 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nfgrz"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.146508 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-44txf"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.158681 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b17c-account-create-update-vgfln"] Dec 09 12:36:45 crc kubenswrapper[4703]: I1209 12:36:45.172287 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nfgrz"] Dec 09 12:36:47 crc kubenswrapper[4703]: I1209 12:36:47.095616 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c41eb5-761c-4cb6-b9dc-57f238f48b87" path="/var/lib/kubelet/pods/02c41eb5-761c-4cb6-b9dc-57f238f48b87/volumes" Dec 09 12:36:47 crc kubenswrapper[4703]: I1209 12:36:47.096601 4703 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="1d051446-c217-438b-9b9f-7477ae28c6f4" path="/var/lib/kubelet/pods/1d051446-c217-438b-9b9f-7477ae28c6f4/volumes" Dec 09 12:36:47 crc kubenswrapper[4703]: I1209 12:36:47.097448 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58dc912e-9acc-425c-b79f-c76604da270b" path="/var/lib/kubelet/pods/58dc912e-9acc-425c-b79f-c76604da270b/volumes" Dec 09 12:36:47 crc kubenswrapper[4703]: I1209 12:36:47.098101 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e04561f1-805c-4e06-a01c-6548e9a234e5" path="/var/lib/kubelet/pods/e04561f1-805c-4e06-a01c-6548e9a234e5/volumes" Dec 09 12:36:48 crc kubenswrapper[4703]: I1209 12:36:48.052255 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0b9b-account-create-update-cv86j"] Dec 09 12:36:48 crc kubenswrapper[4703]: I1209 12:36:48.062764 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0b9b-account-create-update-cv86j"] Dec 09 12:36:48 crc kubenswrapper[4703]: I1209 12:36:48.109924 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kwzbm"] Dec 09 12:36:48 crc kubenswrapper[4703]: I1209 12:36:48.124567 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kwzbm"] Dec 09 12:36:49 crc kubenswrapper[4703]: I1209 12:36:49.083713 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa6f4bf-5553-4149-ae85-b17092bef181" path="/var/lib/kubelet/pods/7aa6f4bf-5553-4149-ae85-b17092bef181/volumes" Dec 09 12:36:49 crc kubenswrapper[4703]: I1209 12:36:49.084352 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f188683b-b8cf-4aab-81cb-ad7318e3e07f" path="/var/lib/kubelet/pods/f188683b-b8cf-4aab-81cb-ad7318e3e07f/volumes" Dec 09 12:36:50 crc kubenswrapper[4703]: E1209 12:36:50.071857 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:36:55 crc kubenswrapper[4703]: E1209 12:36:55.073274 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:37:01 crc kubenswrapper[4703]: E1209 12:37:01.082556 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:37:08 crc kubenswrapper[4703]: E1209 12:37:08.073284 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:37:13 crc kubenswrapper[4703]: E1209 12:37:13.080303 4703 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:37:16 crc kubenswrapper[4703]: I1209 12:37:16.059244 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8tkw9"] Dec 09 12:37:16 crc kubenswrapper[4703]: I1209 12:37:16.072152 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8tkw9"] Dec 09 12:37:17 crc kubenswrapper[4703]: I1209 12:37:17.083926 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7648d9a-d5c5-4854-ad02-ddf686748d6a" path="/var/lib/kubelet/pods/e7648d9a-d5c5-4854-ad02-ddf686748d6a/volumes" Dec 09 12:37:19 crc kubenswrapper[4703]: E1209 12:37:19.073929 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.049579 4703 scope.go:117] "RemoveContainer" containerID="20cc77896d41d7d9a2ffe3a600b895b715d42c2f993eb8da23b8acec00fc0703" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.089314 4703 scope.go:117] "RemoveContainer" containerID="4b9d06174afbe1726e1b77cf2da342f854c892cc37ab83cf2b5a5cf40de1552d" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.151644 4703 scope.go:117] "RemoveContainer" containerID="64325e0686200aa9d7e9a6bc134228605d4b1832fe255bf900296ba1fdff6630" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.215635 4703 scope.go:117] "RemoveContainer" containerID="1d21cd5ee97425d5b078d5f90d4c77843d9520c6c2e53ea074fd2c9ef637612b" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.273948 4703 scope.go:117] "RemoveContainer" containerID="05069b6086c0d5d0c7d734afa0ed5f28ddde9eb9e989d5d1c9b476975f9a73b2" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.340657 4703 scope.go:117] "RemoveContainer" containerID="9fefe56742d92f05796b63895e03254a7f8fa3b253391bdfcd472de5d1b83d26" Dec 09 12:37:20 crc kubenswrapper[4703]: I1209 12:37:20.423867 4703 scope.go:117] "RemoveContainer" containerID="e80be929b3869438c19736e55e885dbea0518c3720d8019b45e243a6ddb3507f" Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.057276 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7693-account-create-update-2gdqt"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.077251 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b7ce-account-create-update-c4zgt"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.097834 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-bfgfz"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.113850 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-ed5b-account-create-update-2nct2"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.128742 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7rnfn"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.144289 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-b7ce-account-create-update-c4zgt"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.158135 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-ed5b-account-create-update-2nct2"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.170606 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7693-account-create-update-2gdqt"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.184319 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7rnfn"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.196091 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-bfgfz"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.207649 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4wl7p"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.219421 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0f3e-account-create-update-2cnnf"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.230055 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4wl7p"] Dec 09 12:37:22 crc kubenswrapper[4703]: I1209 12:37:22.244494 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0f3e-account-create-update-2cnnf"] Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.086515 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4231ddd2-a973-43ee-bfa0-3ab32515beb8" path="/var/lib/kubelet/pods/4231ddd2-a973-43ee-bfa0-3ab32515beb8/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.087972 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0dec58-a2d5-44ef-8a01-2d7b555aed11" path="/var/lib/kubelet/pods/5c0dec58-a2d5-44ef-8a01-2d7b555aed11/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.088765 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72429324-7660-4f13-8658-ede06684d423" path="/var/lib/kubelet/pods/72429324-7660-4f13-8658-ede06684d423/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.089460 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9ded47-bcfd-4613-9fec-80c12e9a64b0" path="/var/lib/kubelet/pods/8a9ded47-bcfd-4613-9fec-80c12e9a64b0/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.090725 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90165483-6678-444e-bda2-249f59813ba9" path="/var/lib/kubelet/pods/90165483-6678-444e-bda2-249f59813ba9/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.091543 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9640f126-3eb2-4402-a141-f3bb22b15f40" path="/var/lib/kubelet/pods/9640f126-3eb2-4402-a141-f3bb22b15f40/volumes" Dec 09 12:37:23 crc kubenswrapper[4703]: I1209 12:37:23.092244 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf475bf7-37dd-4920-888c-b53e8065591e" path="/var/lib/kubelet/pods/bf475bf7-37dd-4920-888c-b53e8065591e/volumes" Dec 09 12:37:25 crc kubenswrapper[4703]: E1209 12:37:25.074165 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" 
podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:37:27 crc kubenswrapper[4703]: I1209 12:37:27.036457 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4fqbv"] Dec 09 12:37:27 crc kubenswrapper[4703]: I1209 12:37:27.045186 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4fqbv"] Dec 09 12:37:27 crc kubenswrapper[4703]: I1209 12:37:27.082939 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5253b89-3c89-45a9-8806-dfabf281ebbb" path="/var/lib/kubelet/pods/d5253b89-3c89-45a9-8806-dfabf281ebbb/volumes" Dec 09 12:37:29 crc kubenswrapper[4703]: I1209 12:37:29.497786 4703 generic.go:334] "Generic (PLEG): container finished" podID="c01bdc0a-4376-4b9d-8418-40e064327bfe" containerID="ec3678b6361519d99854188f39af807ab6879f68e698f2af0a65b1ef1ee4137d" exitCode=0 Dec 09 12:37:29 crc kubenswrapper[4703]: I1209 12:37:29.497894 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" event={"ID":"c01bdc0a-4376-4b9d-8418-40e064327bfe","Type":"ContainerDied","Data":"ec3678b6361519d99854188f39af807ab6879f68e698f2af0a65b1ef1ee4137d"} Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.099783 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.154570 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key\") pod \"c01bdc0a-4376-4b9d-8418-40e064327bfe\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.154817 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle\") pod \"c01bdc0a-4376-4b9d-8418-40e064327bfe\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.154949 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdvkd\" (UniqueName: \"kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd\") pod \"c01bdc0a-4376-4b9d-8418-40e064327bfe\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.155108 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory\") pod \"c01bdc0a-4376-4b9d-8418-40e064327bfe\" (UID: \"c01bdc0a-4376-4b9d-8418-40e064327bfe\") " Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.162395 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c01bdc0a-4376-4b9d-8418-40e064327bfe" (UID: "c01bdc0a-4376-4b9d-8418-40e064327bfe"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.162988 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd" (OuterVolumeSpecName: "kube-api-access-hdvkd") pod "c01bdc0a-4376-4b9d-8418-40e064327bfe" (UID: "c01bdc0a-4376-4b9d-8418-40e064327bfe"). InnerVolumeSpecName "kube-api-access-hdvkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.199365 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c01bdc0a-4376-4b9d-8418-40e064327bfe" (UID: "c01bdc0a-4376-4b9d-8418-40e064327bfe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.209041 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory" (OuterVolumeSpecName: "inventory") pod "c01bdc0a-4376-4b9d-8418-40e064327bfe" (UID: "c01bdc0a-4376-4b9d-8418-40e064327bfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.259286 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.259343 4703 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.259356 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdvkd\" (UniqueName: \"kubernetes.io/projected/c01bdc0a-4376-4b9d-8418-40e064327bfe-kube-api-access-hdvkd\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.259366 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c01bdc0a-4376-4b9d-8418-40e064327bfe-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.524301 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" event={"ID":"c01bdc0a-4376-4b9d-8418-40e064327bfe","Type":"ContainerDied","Data":"b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f"} Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.524617 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3336a1ede882ad45b26d33dc79ced545269e1051f340f12b3c3970dbe19461f" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.524679 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.668922 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts"] Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669566 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="extract-utilities" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669593 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="extract-utilities" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669612 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c01bdc0a-4376-4b9d-8418-40e064327bfe" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669623 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c01bdc0a-4376-4b9d-8418-40e064327bfe" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669650 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669658 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669675 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669683 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669692 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="extract-content" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669700 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="extract-content" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669730 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="extract-utilities" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669740 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="extract-utilities" Dec 09 12:37:31 crc kubenswrapper[4703]: E1209 12:37:31.669754 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="extract-content" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.669762 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="extract-content" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.670037 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d40a6a-f4cd-43fd-9526-e1cbf16f9286" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.670064 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c01bdc0a-4376-4b9d-8418-40e064327bfe" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.670077 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3840d6f3-9258-41d0-8bc1-10a546e47e90" containerName="registry-server" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.671173 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.680969 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.681272 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.681542 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.681706 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.691531 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts"] Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.771750 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.771817 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcf9j\" (UniqueName: \"kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.771862 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.874567 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.874625 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcf9j\" (UniqueName: \"kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: 
\"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.874684 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.883774 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.909271 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcf9j\" (UniqueName: \"kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:31 crc kubenswrapper[4703]: I1209 12:37:31.915534 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7ghts\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:32 crc kubenswrapper[4703]: I1209 12:37:32.013073 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:37:32 crc kubenswrapper[4703]: I1209 12:37:32.653333 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts"] Dec 09 12:37:32 crc kubenswrapper[4703]: I1209 12:37:32.656249 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:37:33 crc kubenswrapper[4703]: E1209 12:37:33.082575 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:37:33 crc kubenswrapper[4703]: I1209 12:37:33.555576 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" event={"ID":"4157464d-e43c-4d62-89b5-ececeb2ff437","Type":"ContainerStarted","Data":"56e8270a97b07f5bd45295423057449ca84402f7af4dc0c8761b93b6803197df"} Dec 09 12:37:33 crc kubenswrapper[4703]: I1209 12:37:33.555648 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" event={"ID":"4157464d-e43c-4d62-89b5-ececeb2ff437","Type":"ContainerStarted","Data":"b0be373eb27fceb113559e470c22c3c0b83d26821002d66517c03d7881b05b13"} Dec 09 12:37:33 crc kubenswrapper[4703]: I1209 12:37:33.585544 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" podStartSLOduration=2.204140192 podStartE2EDuration="2.585515587s" podCreationTimestamp="2025-12-09 12:37:31 +0000 UTC" firstStartedPulling="2025-12-09 12:37:32.656014814 +0000 UTC m=+1951.904778333" lastFinishedPulling="2025-12-09 12:37:33.037390209 +0000 UTC m=+1952.286153728" observedRunningTime="2025-12-09 12:37:33.574595317 +0000 UTC m=+1952.823358856" watchObservedRunningTime="2025-12-09 12:37:33.585515587 +0000 UTC m=+1952.834279106" Dec 09 12:37:39 crc kubenswrapper[4703]: E1209 12:37:39.072689 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:37:45 crc kubenswrapper[4703]: E1209 12:37:45.082730 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:37:53 crc kubenswrapper[4703]: E1209 12:37:53.072505 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:37:57 crc kubenswrapper[4703]: E1209 12:37:57.072810 4703 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:38:00 crc kubenswrapper[4703]: I1209 12:38:00.083675 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:38:00 crc kubenswrapper[4703]: I1209 12:38:00.083960 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:38:06 crc kubenswrapper[4703]: I1209 12:38:06.048600 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-29stw"] Dec 09 12:38:06 crc kubenswrapper[4703]: I1209 12:38:06.063920 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-55f4w"] Dec 09 12:38:06 crc kubenswrapper[4703]: I1209 12:38:06.074718 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-29stw"] Dec 09 12:38:06 crc kubenswrapper[4703]: I1209 12:38:06.084437 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-55f4w"] Dec 09 12:38:07 crc kubenswrapper[4703]: E1209 12:38:07.073249 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:38:07 crc kubenswrapper[4703]: I1209 12:38:07.085711 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3e75ac-9860-42b3-b442-f9bd60c82e58" path="/var/lib/kubelet/pods/0d3e75ac-9860-42b3-b442-f9bd60c82e58/volumes" Dec 09 12:38:07 crc kubenswrapper[4703]: I1209 12:38:07.086433 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e5cccd-9c7c-467b-a10b-4a989ea688e3" path="/var/lib/kubelet/pods/e4e5cccd-9c7c-467b-a10b-4a989ea688e3/volumes" Dec 09 12:38:10 crc kubenswrapper[4703]: E1209 12:38:10.072181 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:38:20 crc kubenswrapper[4703]: I1209 12:38:20.747514 4703 scope.go:117] "RemoveContainer" containerID="e8d2763d1b2c74ab50dd51c74ca6a9e741daa257fa609a0dac2037996b9d20f0" Dec 09 12:38:20 crc kubenswrapper[4703]: I1209 12:38:20.796660 4703 scope.go:117] "RemoveContainer" containerID="f2aabc9a8ae2ebe702d2c1fa8a9cf62a8a17b1b6a75187b7d06fdbebde54602b" Dec 09 12:38:20 crc kubenswrapper[4703]: I1209 12:38:20.850244 4703 scope.go:117] "RemoveContainer" containerID="f5e4c85f2571d5e6aaeeb15fb13a0172a43115609aae53de1c9348495cb363de" Dec 09 12:38:20 crc 
kubenswrapper[4703]: I1209 12:38:20.911065 4703 scope.go:117] "RemoveContainer" containerID="d7be62cb0c4ea0c1d8b1cb1b97e23d227b57b22d5ef05ada651409b6e39a4e1f" Dec 09 12:38:20 crc kubenswrapper[4703]: I1209 12:38:20.938064 4703 scope.go:117] "RemoveContainer" containerID="cc004dec1884e7bde620a684ee12b6a54fda82586a1a849048e930e961a48e98" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.007605 4703 scope.go:117] "RemoveContainer" containerID="7488ed608448613a60ebc602396db673e0ad3ea7a2355cc16b76e77e175fb52b" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.056410 4703 scope.go:117] "RemoveContainer" containerID="597b350cad8d8aacc535de81806eabf41e75c87bf67770c72b69ed15a3617f2c" Dec 09 12:38:21 crc kubenswrapper[4703]: E1209 12:38:21.071526 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.111430 4703 scope.go:117] "RemoveContainer" containerID="a26689d288ea613e0334ef9a66f35a89866a4f4a77d1792b39715b0d18789404" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.170923 4703 scope.go:117] "RemoveContainer" containerID="cc4bdd8d09115be028816c8cc89586b2c346a0671b25aa43d6e71daed0bad9b9" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.199157 4703 scope.go:117] "RemoveContainer" containerID="c93d36ae95a018ec0cad8bb7d4331b801f4ee97e8ec9fde125cf2ee8564bded6" Dec 09 12:38:21 crc kubenswrapper[4703]: E1209 12:38:21.211031 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:38:21 crc kubenswrapper[4703]: E1209 12:38:21.211104 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:38:21 crc kubenswrapper[4703]: E1209 12:38:21.211312 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:38:21 crc kubenswrapper[4703]: E1209 12:38:21.212705 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.225344 4703 scope.go:117] "RemoveContainer" containerID="92be794a9d02e8f84283d05339272c02db8f36ab4afebcd8203b7e9346dc941a" Dec 09 12:38:21 crc kubenswrapper[4703]: I1209 12:38:21.254351 4703 scope.go:117] "RemoveContainer" containerID="5317d50e770dc7f504e942603be31254e3829b1453e26c80c41adea181fc1596" Dec 09 12:38:30 crc kubenswrapper[4703]: I1209 12:38:30.083588 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:38:30 crc kubenswrapper[4703]: I1209 12:38:30.084227 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.072953 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-q85xp"] Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.082714 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-q85xp"] Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.102315 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6b8bj"] Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.118345 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lq8j2"] Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.127474 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6b8bj"] Dec 09 12:38:34 crc kubenswrapper[4703]: I1209 12:38:34.139068 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lq8j2"] Dec 09 12:38:35 crc kubenswrapper[4703]: E1209 12:38:35.072893 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:38:35 crc kubenswrapper[4703]: I1209 12:38:35.084623 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecae80d-2552-44d1-9b62-c7112893d38f" path="/var/lib/kubelet/pods/3ecae80d-2552-44d1-9b62-c7112893d38f/volumes" Dec 09 12:38:35 crc kubenswrapper[4703]: I1209 12:38:35.085640 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7" path="/var/lib/kubelet/pods/6eb7e89e-dacb-4bd5-a8e5-ebe2a91807d7/volumes" Dec 09 12:38:35 crc kubenswrapper[4703]: I1209 12:38:35.086251 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75741f58-7780-4daa-a61b-6985e8baeb5b" path="/var/lib/kubelet/pods/75741f58-7780-4daa-a61b-6985e8baeb5b/volumes" Dec 09 12:38:35 crc kubenswrapper[4703]: E1209 12:38:35.159190 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:38:35 crc kubenswrapper[4703]: E1209 12:38:35.159299 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:38:35 crc kubenswrapper[4703]: E1209 12:38:35.159542 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:38:35 crc kubenswrapper[4703]: E1209 12:38:35.160785 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:38:46 crc kubenswrapper[4703]: I1209 12:38:46.063665 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5tddl"] Dec 09 12:38:46 crc kubenswrapper[4703]: I1209 12:38:46.075602 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5tddl"] Dec 09 12:38:47 crc kubenswrapper[4703]: I1209 12:38:47.087784 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ebaba5-65a6-4be5-8112-10c77a6d986c" path="/var/lib/kubelet/pods/24ebaba5-65a6-4be5-8112-10c77a6d986c/volumes" Dec 09 12:38:48 crc kubenswrapper[4703]: E1209 12:38:48.072199 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:38:50 crc kubenswrapper[4703]: E1209 12:38:50.072182 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.636124 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.647641 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.659021 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.771900 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7kj\" (UniqueName: \"kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.772174 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.772259 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.874746 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.875077 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7kj\" (UniqueName: \"kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.875111 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.875860 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.876171 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.904981 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rg7kj\" (UniqueName: \"kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj\") pod \"redhat-operators-6qh44\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:52 crc kubenswrapper[4703]: I1209 12:38:52.991009 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:38:53 crc kubenswrapper[4703]: I1209 12:38:53.552368 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:38:53 crc kubenswrapper[4703]: W1209 12:38:53.571940 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc901e280_1fff_4576_a016_e5da19e0028d.slice/crio-fa14ec803e2c93734b79f939256589f57829a2feae3696b4b7dc808089f60a8d WatchSource:0}: Error finding container fa14ec803e2c93734b79f939256589f57829a2feae3696b4b7dc808089f60a8d: Status 404 returned error can't find the container with id fa14ec803e2c93734b79f939256589f57829a2feae3696b4b7dc808089f60a8d Dec 09 12:38:53 crc kubenswrapper[4703]: I1209 12:38:53.696675 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerStarted","Data":"fa14ec803e2c93734b79f939256589f57829a2feae3696b4b7dc808089f60a8d"} Dec 09 12:38:54 crc kubenswrapper[4703]: I1209 12:38:54.708832 4703 generic.go:334] "Generic (PLEG): container finished" podID="c901e280-1fff-4576-a016-e5da19e0028d" containerID="df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669" exitCode=0 Dec 09 12:38:54 crc kubenswrapper[4703]: I1209 12:38:54.708890 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerDied","Data":"df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669"} Dec 09 12:38:55 crc kubenswrapper[4703]: I1209 12:38:55.723488 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerStarted","Data":"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df"} Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.083601 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.084200 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.084267 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.085252 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.085324 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3" gracePeriod=600 Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.335316 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3" exitCode=0 Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.335373 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3"} Dec 09 12:39:00 crc kubenswrapper[4703]: I1209 12:39:00.335418 4703 scope.go:117] "RemoveContainer" containerID="aec88ccdbd5bc6c7f7f9dd534b57b24318f2c545ead2d59998c5ffb08ae72b46" Dec 09 12:39:01 crc kubenswrapper[4703]: E1209 12:39:01.102349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:39:01 crc kubenswrapper[4703]: I1209 12:39:01.345999 4703 generic.go:334] "Generic (PLEG): container finished" podID="c901e280-1fff-4576-a016-e5da19e0028d" containerID="13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df" exitCode=0 Dec 09 12:39:01 crc kubenswrapper[4703]: I1209 12:39:01.346081 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerDied","Data":"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df"} Dec 09 12:39:01 crc kubenswrapper[4703]: I1209 12:39:01.349475 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5"} Dec 09 12:39:02 crc kubenswrapper[4703]: I1209 12:39:02.368075 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerStarted","Data":"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7"} Dec 09 12:39:02 crc kubenswrapper[4703]: I1209 12:39:02.395816 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qh44" podStartSLOduration=3.230420599 podStartE2EDuration="10.395790987s" podCreationTimestamp="2025-12-09 12:38:52 +0000 UTC" firstStartedPulling="2025-12-09 12:38:54.711743732 +0000 UTC m=+2033.960507251" lastFinishedPulling="2025-12-09 12:39:01.87711413 +0000 UTC m=+2041.125877639" 
observedRunningTime="2025-12-09 12:39:02.391283237 +0000 UTC m=+2041.640046756" watchObservedRunningTime="2025-12-09 12:39:02.395790987 +0000 UTC m=+2041.644554506" Dec 09 12:39:02 crc kubenswrapper[4703]: I1209 12:39:02.992137 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:02 crc kubenswrapper[4703]: I1209 12:39:02.992485 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:04 crc kubenswrapper[4703]: I1209 12:39:04.045494 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6qh44" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="registry-server" probeResult="failure" output=< Dec 09 12:39:04 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:39:04 crc kubenswrapper[4703]: > Dec 09 12:39:05 crc kubenswrapper[4703]: E1209 12:39:05.072568 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:39:06 crc kubenswrapper[4703]: I1209 12:39:06.032528 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-zflpx"] Dec 09 12:39:06 crc kubenswrapper[4703]: I1209 12:39:06.043590 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-zflpx"] Dec 09 12:39:07 crc kubenswrapper[4703]: I1209 12:39:07.083330 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1" path="/var/lib/kubelet/pods/0f9a15f4-669d-4ad2-89a7-4c422b0b6bf1/volumes" Dec 09 12:39:13 crc kubenswrapper[4703]: I1209 12:39:13.047366 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:13 crc kubenswrapper[4703]: I1209 12:39:13.114758 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:13 crc kubenswrapper[4703]: I1209 12:39:13.293134 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:39:14 crc kubenswrapper[4703]: I1209 12:39:14.498186 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qh44" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="registry-server" containerID="cri-o://433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7" gracePeriod=2 Dec 09 12:39:15 crc kubenswrapper[4703]: E1209 12:39:15.072489 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.100257 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.198356 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg7kj\" (UniqueName: \"kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj\") pod \"c901e280-1fff-4576-a016-e5da19e0028d\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.198536 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities\") pod \"c901e280-1fff-4576-a016-e5da19e0028d\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.198874 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content\") pod \"c901e280-1fff-4576-a016-e5da19e0028d\" (UID: \"c901e280-1fff-4576-a016-e5da19e0028d\") " Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.199677 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities" (OuterVolumeSpecName: "utilities") pod "c901e280-1fff-4576-a016-e5da19e0028d" (UID: "c901e280-1fff-4576-a016-e5da19e0028d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.231584 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj" (OuterVolumeSpecName: "kube-api-access-rg7kj") pod "c901e280-1fff-4576-a016-e5da19e0028d" (UID: "c901e280-1fff-4576-a016-e5da19e0028d"). InnerVolumeSpecName "kube-api-access-rg7kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.303376 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg7kj\" (UniqueName: \"kubernetes.io/projected/c901e280-1fff-4576-a016-e5da19e0028d-kube-api-access-rg7kj\") on node \"crc\" DevicePath \"\"" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.303747 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.344776 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c901e280-1fff-4576-a016-e5da19e0028d" (UID: "c901e280-1fff-4576-a016-e5da19e0028d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.406043 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c901e280-1fff-4576-a016-e5da19e0028d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.514917 4703 generic.go:334] "Generic (PLEG): container finished" podID="c901e280-1fff-4576-a016-e5da19e0028d" containerID="433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7" exitCode=0 Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.515024 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qh44" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.515028 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerDied","Data":"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7"} Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.516353 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qh44" event={"ID":"c901e280-1fff-4576-a016-e5da19e0028d","Type":"ContainerDied","Data":"fa14ec803e2c93734b79f939256589f57829a2feae3696b4b7dc808089f60a8d"} Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.516385 4703 scope.go:117] "RemoveContainer" containerID="433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.545421 4703 scope.go:117] "RemoveContainer" containerID="13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.572712 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.582627 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qh44"] Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.613586 4703 scope.go:117] "RemoveContainer" containerID="df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.648923 4703 scope.go:117] "RemoveContainer" containerID="433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7" Dec 09 12:39:15 crc kubenswrapper[4703]: E1209 12:39:15.649615 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7\": container with ID starting with 433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7 not found: ID does not exist" containerID="433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.649667 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7"} err="failed to get container status \"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7\": rpc error: code = NotFound desc = could not find container \"433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7\": container with ID starting with 433b81377d07282e5bdfec55605c7959e433d5257e1df3cd152a06de3f19d0e7 not found: ID does not exist" Dec 09 12:39:15 crc 
kubenswrapper[4703]: I1209 12:39:15.649699 4703 scope.go:117] "RemoveContainer" containerID="13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df" Dec 09 12:39:15 crc kubenswrapper[4703]: E1209 12:39:15.650131 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df\": container with ID starting with 13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df not found: ID does not exist" containerID="13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.650172 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df"} err="failed to get container status \"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df\": rpc error: code = NotFound desc = could not find container \"13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df\": container with ID starting with 13f9cf518fe3fefdfe6cccbdabdeed6a7835199753295aadd029f948809e02df not found: ID does not exist" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.650218 4703 scope.go:117] "RemoveContainer" containerID="df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669" Dec 09 12:39:15 crc kubenswrapper[4703]: E1209 12:39:15.650634 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669\": container with ID starting with df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669 not found: ID does not exist" containerID="df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669" Dec 09 12:39:15 crc kubenswrapper[4703]: I1209 12:39:15.650664 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669"} err="failed to get container status \"df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669\": rpc error: code = NotFound desc = could not find container \"df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669\": container with ID starting with df9fd20cae91e8c7dbae4b9b6611ac422b5ae03de8c7f29e6e740345b33a7669 not found: ID does not exist" Dec 09 12:39:16 crc kubenswrapper[4703]: E1209 12:39:16.070990 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:39:17 crc kubenswrapper[4703]: I1209 12:39:17.083990 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c901e280-1fff-4576-a016-e5da19e0028d" path="/var/lib/kubelet/pods/c901e280-1fff-4576-a016-e5da19e0028d/volumes" Dec 09 12:39:21 crc kubenswrapper[4703]: I1209 12:39:21.480561 4703 scope.go:117] "RemoveContainer" containerID="d740a9dc6c4bf9dbe86f36ddda107bf71bc493667564b38b84e6fcf64e2c4497" Dec 09 12:39:21 crc kubenswrapper[4703]: I1209 12:39:21.522396 4703 scope.go:117] "RemoveContainer" containerID="6f5fa72102cf4873c2f34f345e09eb221c662001500660781d4003559b5546ae" Dec 09 12:39:21 crc kubenswrapper[4703]: I1209 12:39:21.583075 4703 scope.go:117] 
"RemoveContainer" containerID="5f49fc5c65b86923d680125a38df5215f1aed13237ac8f72e8e63f62750a4346" Dec 09 12:39:21 crc kubenswrapper[4703]: I1209 12:39:21.690456 4703 scope.go:117] "RemoveContainer" containerID="ffe8c00834f88e8a1b16950144072de9b323588a9a22d84558f4307225e3aba7" Dec 09 12:39:21 crc kubenswrapper[4703]: I1209 12:39:21.745448 4703 scope.go:117] "RemoveContainer" containerID="a267761f02b0927b2a54c59cddb2937a8648c0ef4f03c7f5e1463b4b17805921" Dec 09 12:39:27 crc kubenswrapper[4703]: E1209 12:39:27.072921 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:39:28 crc kubenswrapper[4703]: E1209 12:39:28.072109 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:39:42 crc kubenswrapper[4703]: I1209 12:39:42.059840 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fms5f"] Dec 09 12:39:42 crc kubenswrapper[4703]: I1209 12:39:42.072631 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fms5f"] Dec 09 12:39:42 crc kubenswrapper[4703]: E1209 12:39:42.072959 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.046150 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bqvwt"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.062750 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-241d-account-create-update-jx7s6"] Dec 09 12:39:43 crc kubenswrapper[4703]: E1209 12:39:43.072389 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.085033 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5815e2-e19c-491f-b0c9-19e651e10fec" path="/var/lib/kubelet/pods/3b5815e2-e19c-491f-b0c9-19e651e10fec/volumes" Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.085844 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5jcln"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.089106 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5432-account-create-update-t6hf2"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.101380 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2cb9-account-create-update-n9tpj"] Dec 09 12:39:43 crc 
kubenswrapper[4703]: I1209 12:39:43.113684 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bqvwt"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.126055 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5432-account-create-update-t6hf2"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.137772 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5jcln"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.149439 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-241d-account-create-update-jx7s6"] Dec 09 12:39:43 crc kubenswrapper[4703]: I1209 12:39:43.159502 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2cb9-account-create-update-n9tpj"] Dec 09 12:39:45 crc kubenswrapper[4703]: I1209 12:39:45.085119 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73" path="/var/lib/kubelet/pods/59cdfe75-7ac7-4bf4-9b72-951c5ee0bc73/volumes" Dec 09 12:39:45 crc kubenswrapper[4703]: I1209 12:39:45.086404 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d30e84-090d-427a-b0bc-f41ced88f0b4" path="/var/lib/kubelet/pods/76d30e84-090d-427a-b0bc-f41ced88f0b4/volumes" Dec 09 12:39:45 crc kubenswrapper[4703]: I1209 12:39:45.087176 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b712bf-6532-486d-bf10-577b49caba4c" path="/var/lib/kubelet/pods/d9b712bf-6532-486d-bf10-577b49caba4c/volumes" Dec 09 12:39:45 crc kubenswrapper[4703]: I1209 12:39:45.087878 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26022ad-c235-4dd6-abe2-40fe489afe81" path="/var/lib/kubelet/pods/f26022ad-c235-4dd6-abe2-40fe489afe81/volumes" Dec 09 12:39:45 crc kubenswrapper[4703]: I1209 12:39:45.089322 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9" path="/var/lib/kubelet/pods/fb4bb3a0-758c-4ce3-b2a6-5b34f95531b9/volumes" Dec 09 12:39:56 crc kubenswrapper[4703]: E1209 12:39:56.073115 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:39:56 crc kubenswrapper[4703]: E1209 12:39:56.073684 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:40:10 crc kubenswrapper[4703]: E1209 12:40:10.085482 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:40:11 crc kubenswrapper[4703]: E1209 12:40:11.083449 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.015771 4703 scope.go:117] "RemoveContainer" containerID="72f141da5c8a031c7d1251d02c2802bd911e730c1e439cf1df41961391ffb863" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.052497 4703 scope.go:117] "RemoveContainer" containerID="96d8eea2d9e701a39ee402070c6fd873845af43c63404625177ee02b3fcc549e" Dec 09 12:40:22 crc kubenswrapper[4703]: E1209 12:40:22.072855 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.120017 4703 scope.go:117] "RemoveContainer" containerID="e551da69f2355c1f3f90cd74c443fb8372ec937fbde9a16152a5767d78549c3e" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.193360 4703 scope.go:117] "RemoveContainer" containerID="54250a58131dee7a170d21769a59d82eae888a0c48199c1ab36b6af7eb2bead5" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.234839 4703 scope.go:117] "RemoveContainer" containerID="a7a8dfd51dc66e816339a1f32b077ab2394ffdb55d1bd8f1a1a0dc2fa0de5464" Dec 09 12:40:22 crc kubenswrapper[4703]: I1209 12:40:22.298206 4703 scope.go:117] "RemoveContainer" containerID="7b08e94082453392b3ff99d150e7a01f9247396a0410b23ab2b090f4d33a5e26" Dec 09 12:40:25 crc kubenswrapper[4703]: E1209 12:40:25.073543 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:40:34 crc kubenswrapper[4703]: E1209 12:40:34.079430 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:40:40 crc kubenswrapper[4703]: E1209 12:40:40.073673 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:40:41 crc kubenswrapper[4703]: I1209 12:40:41.089641 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xdtxt"] Dec 09 12:40:41 crc kubenswrapper[4703]: I1209 12:40:41.106100 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xdtxt"] Dec 09 12:40:43 crc kubenswrapper[4703]: I1209 12:40:43.084867 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="955f8136-42b0-4fce-8853-3c467cf8d070" path="/var/lib/kubelet/pods/955f8136-42b0-4fce-8853-3c467cf8d070/volumes" Dec 09 12:40:46 crc kubenswrapper[4703]: E1209 12:40:46.072182 4703 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:40:52 crc kubenswrapper[4703]: E1209 12:40:52.073769 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:40:58 crc kubenswrapper[4703]: E1209 12:40:58.072767 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:41:00 crc kubenswrapper[4703]: I1209 12:41:00.084150 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:41:00 crc kubenswrapper[4703]: I1209 12:41:00.084532 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:41:06 crc kubenswrapper[4703]: E1209 12:41:06.072634 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:41:12 crc kubenswrapper[4703]: E1209 12:41:12.078610 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:41:15 crc kubenswrapper[4703]: I1209 12:41:15.048934 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhqpt"] Dec 09 12:41:15 crc kubenswrapper[4703]: I1209 12:41:15.060276 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhqpt"] Dec 09 12:41:15 crc kubenswrapper[4703]: I1209 12:41:15.086020 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25934d60-c1c2-41eb-9470-cdcab1791642" path="/var/lib/kubelet/pods/25934d60-c1c2-41eb-9470-cdcab1791642/volumes" Dec 09 12:41:16 crc kubenswrapper[4703]: I1209 12:41:16.042385 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mz25q"] Dec 09 12:41:16 crc kubenswrapper[4703]: I1209 12:41:16.052604 4703 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-mz25q"] Dec 09 12:41:17 crc kubenswrapper[4703]: I1209 12:41:17.085938 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544746e3-7d8d-4459-a5d0-e6688c51ccbd" path="/var/lib/kubelet/pods/544746e3-7d8d-4459-a5d0-e6688c51ccbd/volumes" Dec 09 12:41:21 crc kubenswrapper[4703]: E1209 12:41:21.079253 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:41:22 crc kubenswrapper[4703]: I1209 12:41:22.519255 4703 scope.go:117] "RemoveContainer" containerID="f5e137403eee78de79fb4ff40bf3ebdec88e1d547fca3518aa70ff40c87b7550" Dec 09 12:41:22 crc kubenswrapper[4703]: I1209 12:41:22.595685 4703 scope.go:117] "RemoveContainer" containerID="9e13c46d74f28c98eeae969c2900de9c130c095741926371229528dd47c053e4" Dec 09 12:41:22 crc kubenswrapper[4703]: I1209 12:41:22.629536 4703 scope.go:117] "RemoveContainer" containerID="4a9f04ce87d53b7cb3be2bf3ca56af97f08617520d8cd41677fd8eb0dc9f2eff" Dec 09 12:41:25 crc kubenswrapper[4703]: E1209 12:41:25.076130 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:41:30 crc kubenswrapper[4703]: I1209 12:41:30.083398 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:41:30 crc kubenswrapper[4703]: I1209 12:41:30.084124 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:41:33 crc kubenswrapper[4703]: E1209 12:41:33.074450 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:41:39 crc kubenswrapper[4703]: E1209 12:41:39.073223 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:41:44 crc kubenswrapper[4703]: E1209 12:41:44.076355 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:41:53 crc kubenswrapper[4703]: E1209 12:41:53.090349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:41:57 crc kubenswrapper[4703]: I1209 12:41:57.048017 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-znxfc"] Dec 09 12:41:57 crc kubenswrapper[4703]: E1209 12:41:57.079828 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:41:57 crc kubenswrapper[4703]: I1209 12:41:57.103274 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-znxfc"] Dec 09 12:41:59 crc kubenswrapper[4703]: I1209 12:41:59.085151 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb06d6a-e31a-4624-bbca-fa4f970d5685" path="/var/lib/kubelet/pods/3bb06d6a-e31a-4624-bbca-fa4f970d5685/volumes" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.083286 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.083372 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.083434 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.084154 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.084261 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" gracePeriod=600 Dec 09 12:42:00 crc kubenswrapper[4703]: E1209 12:42:00.333928 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.430740 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" exitCode=0 Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.430798 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5"} Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.430839 4703 scope.go:117] "RemoveContainer" containerID="4a64237187233fa81213ab41a33446def4bff6242a1454c3a80bdbd693e41ba3" Dec 09 12:42:00 crc kubenswrapper[4703]: I1209 12:42:00.432466 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:42:00 crc kubenswrapper[4703]: E1209 12:42:00.432762 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:05 crc kubenswrapper[4703]: E1209 12:42:05.072010 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:42:09 crc kubenswrapper[4703]: E1209 12:42:09.073695 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:42:14 crc kubenswrapper[4703]: I1209 12:42:14.070000 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:42:14 crc kubenswrapper[4703]: E1209 12:42:14.071979 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:18 crc kubenswrapper[4703]: E1209 12:42:18.075602 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:42:22 crc kubenswrapper[4703]: I1209 12:42:22.779600 4703 scope.go:117] "RemoveContainer" containerID="5444d85da0ade88650fe09ff8acb4d4b0e785660fa906a130d8a24089db9f2ea" Dec 09 12:42:24 crc kubenswrapper[4703]: E1209 12:42:24.072436 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:42:28 crc kubenswrapper[4703]: I1209 12:42:28.070285 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:42:28 crc kubenswrapper[4703]: E1209 12:42:28.071092 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:31 crc kubenswrapper[4703]: E1209 12:42:31.080287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:42:35 crc kubenswrapper[4703]: E1209 12:42:35.073673 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:42:42 crc kubenswrapper[4703]: I1209 12:42:42.070467 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:42:42 crc kubenswrapper[4703]: E1209 12:42:42.071462 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:45 crc kubenswrapper[4703]: E1209 12:42:45.073497 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:42:46 crc kubenswrapper[4703]: E1209 12:42:46.074111 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.071484 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:42:56 crc kubenswrapper[4703]: E1209 12:42:56.072618 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:42:56 crc kubenswrapper[4703]: E1209 12:42:56.074382 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.920710 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:42:56 crc kubenswrapper[4703]: E1209 12:42:56.921240 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="extract-utilities" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.921430 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="extract-utilities" Dec 09 12:42:56 crc kubenswrapper[4703]: E1209 12:42:56.921442 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="registry-server" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.921448 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="registry-server" Dec 09 12:42:56 crc kubenswrapper[4703]: E1209 12:42:56.921490 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="extract-content" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.921496 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="extract-content" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.921779 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901e280-1fff-4576-a016-e5da19e0028d" containerName="registry-server" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.923813 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:56 crc kubenswrapper[4703]: I1209 12:42:56.954562 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.020207 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.020381 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.020496 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h55z\" (UniqueName: \"kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: E1209 12:42:57.072579 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.122674 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.122806 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.123805 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h55z\" (UniqueName: \"kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.124400 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.124483 4703 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.150729 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h55z\" (UniqueName: \"kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z\") pod \"redhat-marketplace-qfd2r\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:57 crc kubenswrapper[4703]: I1209 12:42:57.252162 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:42:58 crc kubenswrapper[4703]: I1209 12:42:58.626741 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:42:59 crc kubenswrapper[4703]: I1209 12:42:59.132988 4703 generic.go:334] "Generic (PLEG): container finished" podID="904a8e4f-6578-4011-9045-23ea10323dc3" containerID="9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493" exitCode=0 Dec 09 12:42:59 crc kubenswrapper[4703]: I1209 12:42:59.133104 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerDied","Data":"9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493"} Dec 09 12:42:59 crc kubenswrapper[4703]: I1209 12:42:59.133412 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerStarted","Data":"44a5338a36d2f1df577e8afba6daec4e43b0af031503c1ddf7d8fad4a91824a5"} Dec 09 12:42:59 crc kubenswrapper[4703]: I1209 12:42:59.137430 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:43:00 crc kubenswrapper[4703]: I1209 12:43:00.150228 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerStarted","Data":"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a"} Dec 09 12:43:01 crc kubenswrapper[4703]: I1209 12:43:01.166016 4703 generic.go:334] "Generic (PLEG): container finished" podID="904a8e4f-6578-4011-9045-23ea10323dc3" containerID="950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a" exitCode=0 Dec 09 12:43:01 crc kubenswrapper[4703]: I1209 12:43:01.166092 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerDied","Data":"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a"} Dec 09 12:43:02 crc kubenswrapper[4703]: I1209 12:43:02.181673 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerStarted","Data":"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4"} Dec 09 12:43:02 crc kubenswrapper[4703]: I1209 12:43:02.206308 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfd2r" 
podStartSLOduration=3.6624143890000003 podStartE2EDuration="6.206275889s" podCreationTimestamp="2025-12-09 12:42:56 +0000 UTC" firstStartedPulling="2025-12-09 12:42:59.137086678 +0000 UTC m=+2278.385850197" lastFinishedPulling="2025-12-09 12:43:01.680948178 +0000 UTC m=+2280.929711697" observedRunningTime="2025-12-09 12:43:02.201709998 +0000 UTC m=+2281.450473527" watchObservedRunningTime="2025-12-09 12:43:02.206275889 +0000 UTC m=+2281.455039408" Dec 09 12:43:07 crc kubenswrapper[4703]: E1209 12:43:07.073058 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:43:07 crc kubenswrapper[4703]: I1209 12:43:07.253909 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:07 crc kubenswrapper[4703]: I1209 12:43:07.253989 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:07 crc kubenswrapper[4703]: I1209 12:43:07.363231 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:08 crc kubenswrapper[4703]: I1209 12:43:08.329374 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:08 crc kubenswrapper[4703]: I1209 12:43:08.390032 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:43:09 crc kubenswrapper[4703]: I1209 12:43:09.071237 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:43:09 crc kubenswrapper[4703]: E1209 12:43:09.071564 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:43:10 crc kubenswrapper[4703]: I1209 12:43:10.282074 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfd2r" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="registry-server" containerID="cri-o://24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4" gracePeriod=2 Dec 09 12:43:10 crc kubenswrapper[4703]: I1209 12:43:10.940517 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.061164 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content\") pod \"904a8e4f-6578-4011-9045-23ea10323dc3\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.061303 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities\") pod \"904a8e4f-6578-4011-9045-23ea10323dc3\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.061848 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h55z\" (UniqueName: \"kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z\") pod \"904a8e4f-6578-4011-9045-23ea10323dc3\" (UID: \"904a8e4f-6578-4011-9045-23ea10323dc3\") " Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.062266 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities" (OuterVolumeSpecName: "utilities") pod "904a8e4f-6578-4011-9045-23ea10323dc3" (UID: "904a8e4f-6578-4011-9045-23ea10323dc3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.063370 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.071050 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z" (OuterVolumeSpecName: "kube-api-access-8h55z") pod "904a8e4f-6578-4011-9045-23ea10323dc3" (UID: "904a8e4f-6578-4011-9045-23ea10323dc3"). InnerVolumeSpecName "kube-api-access-8h55z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.084323 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "904a8e4f-6578-4011-9045-23ea10323dc3" (UID: "904a8e4f-6578-4011-9045-23ea10323dc3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.169135 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/904a8e4f-6578-4011-9045-23ea10323dc3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.169611 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h55z\" (UniqueName: \"kubernetes.io/projected/904a8e4f-6578-4011-9045-23ea10323dc3-kube-api-access-8h55z\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.297065 4703 generic.go:334] "Generic (PLEG): container finished" podID="904a8e4f-6578-4011-9045-23ea10323dc3" containerID="24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4" exitCode=0 Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.297127 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerDied","Data":"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4"} Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.298101 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfd2r" event={"ID":"904a8e4f-6578-4011-9045-23ea10323dc3","Type":"ContainerDied","Data":"44a5338a36d2f1df577e8afba6daec4e43b0af031503c1ddf7d8fad4a91824a5"} Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.298171 4703 scope.go:117] "RemoveContainer" containerID="24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.297178 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfd2r" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.332263 4703 scope.go:117] "RemoveContainer" containerID="950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.336047 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.351517 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfd2r"] Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.363510 4703 scope.go:117] "RemoveContainer" containerID="9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.412994 4703 scope.go:117] "RemoveContainer" containerID="24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4" Dec 09 12:43:11 crc kubenswrapper[4703]: E1209 12:43:11.413835 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4\": container with ID starting with 24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4 not found: ID does not exist" containerID="24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.413944 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4"} err="failed to get container status \"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4\": rpc error: code = NotFound desc = could not find container \"24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4\": container with ID starting with 24db3fdb8485aaba8c67e213b9f0d94bedb2286e0d2753659e3c3f4dd57f59a4 not found: ID does not exist" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.414128 4703 scope.go:117] "RemoveContainer" containerID="950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a" Dec 09 12:43:11 crc kubenswrapper[4703]: E1209 12:43:11.414877 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a\": container with ID starting with 950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a not found: ID does not exist" containerID="950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.414925 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a"} err="failed to get container status \"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a\": rpc error: code = NotFound desc = could not find container \"950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a\": container with ID starting with 950f14350c69fe99c89edd3874e1c0d79036863a3c25622a9fd0af4819d3ff8a not found: ID does not exist" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.414962 4703 scope.go:117] "RemoveContainer" containerID="9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493" Dec 09 12:43:11 crc kubenswrapper[4703]: E1209 12:43:11.415383 4703 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493\": container with ID starting with 9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493 not found: ID does not exist" containerID="9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493" Dec 09 12:43:11 crc kubenswrapper[4703]: I1209 12:43:11.415485 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493"} err="failed to get container status \"9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493\": rpc error: code = NotFound desc = could not find container \"9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493\": container with ID starting with 9dabd37e60c85fd657300603d8a86c9f0c4119ab5c75a86e8895cc70994fc493 not found: ID does not exist" Dec 09 12:43:12 crc kubenswrapper[4703]: E1209 12:43:12.072432 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:43:13 crc kubenswrapper[4703]: I1209 12:43:13.085723 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" path="/var/lib/kubelet/pods/904a8e4f-6578-4011-9045-23ea10323dc3/volumes" Dec 09 12:43:18 crc kubenswrapper[4703]: E1209 12:43:18.072660 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:43:23 crc kubenswrapper[4703]: I1209 12:43:23.070756 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:43:23 crc kubenswrapper[4703]: E1209 12:43:23.071762 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:43:27 crc kubenswrapper[4703]: E1209 12:43:27.175050 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:43:27 crc kubenswrapper[4703]: E1209 12:43:27.175649 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:43:27 crc kubenswrapper[4703]: E1209 12:43:27.175813 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:43:27 crc kubenswrapper[4703]: E1209 12:43:27.177893 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:43:32 crc kubenswrapper[4703]: E1209 12:43:32.073485 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:43:38 crc kubenswrapper[4703]: I1209 12:43:38.069637 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:43:38 crc kubenswrapper[4703]: E1209 12:43:38.070359 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:43:42 crc kubenswrapper[4703]: E1209 12:43:42.073383 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:43:45 crc kubenswrapper[4703]: E1209 12:43:45.205772 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:43:45 crc kubenswrapper[4703]: E1209 12:43:45.206373 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:43:45 crc kubenswrapper[4703]: E1209 12:43:45.206623 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:43:45 crc kubenswrapper[4703]: E1209 12:43:45.207872 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:43:53 crc kubenswrapper[4703]: I1209 12:43:53.071223 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:43:53 crc kubenswrapper[4703]: E1209 12:43:53.072518 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:43:53 crc kubenswrapper[4703]: E1209 12:43:53.073846 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:44:01 crc kubenswrapper[4703]: E1209 12:44:01.084248 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:44:04 crc kubenswrapper[4703]: I1209 12:44:04.070397 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:44:04 crc kubenswrapper[4703]: E1209 12:44:04.071956 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:44:04 crc kubenswrapper[4703]: E1209 12:44:04.073395 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:44:13 crc kubenswrapper[4703]: E1209 12:44:13.072407 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:44:15 crc kubenswrapper[4703]: I1209 12:44:15.070011 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:44:15 crc kubenswrapper[4703]: E1209 12:44:15.070602 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:44:15 crc kubenswrapper[4703]: E1209 12:44:15.072429 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:44:25 crc kubenswrapper[4703]: E1209 12:44:25.072246 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:44:28 crc kubenswrapper[4703]: I1209 12:44:28.070273 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:44:28 crc kubenswrapper[4703]: E1209 12:44:28.070940 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:44:28 crc kubenswrapper[4703]: E1209 12:44:28.073144 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:44:37 crc kubenswrapper[4703]: E1209 12:44:37.072349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:44:38 crc kubenswrapper[4703]: I1209 12:44:38.208384 4703 generic.go:334] "Generic (PLEG): container finished" podID="4157464d-e43c-4d62-89b5-ececeb2ff437" containerID="56e8270a97b07f5bd45295423057449ca84402f7af4dc0c8761b93b6803197df" exitCode=2 Dec 09 12:44:38 crc kubenswrapper[4703]: I1209 12:44:38.208733 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" event={"ID":"4157464d-e43c-4d62-89b5-ececeb2ff437","Type":"ContainerDied","Data":"56e8270a97b07f5bd45295423057449ca84402f7af4dc0c8761b93b6803197df"} Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.849225 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.899513 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key\") pod \"4157464d-e43c-4d62-89b5-ececeb2ff437\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.899829 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcf9j\" (UniqueName: \"kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j\") pod \"4157464d-e43c-4d62-89b5-ececeb2ff437\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.899929 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory\") pod \"4157464d-e43c-4d62-89b5-ececeb2ff437\" (UID: \"4157464d-e43c-4d62-89b5-ececeb2ff437\") " Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.922551 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j" (OuterVolumeSpecName: "kube-api-access-tcf9j") pod "4157464d-e43c-4d62-89b5-ececeb2ff437" (UID: "4157464d-e43c-4d62-89b5-ececeb2ff437"). InnerVolumeSpecName "kube-api-access-tcf9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.962596 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4157464d-e43c-4d62-89b5-ececeb2ff437" (UID: "4157464d-e43c-4d62-89b5-ececeb2ff437"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:44:39 crc kubenswrapper[4703]: I1209 12:44:39.996457 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory" (OuterVolumeSpecName: "inventory") pod "4157464d-e43c-4d62-89b5-ececeb2ff437" (UID: "4157464d-e43c-4d62-89b5-ececeb2ff437"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.009315 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.009376 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcf9j\" (UniqueName: \"kubernetes.io/projected/4157464d-e43c-4d62-89b5-ececeb2ff437-kube-api-access-tcf9j\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.009434 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4157464d-e43c-4d62-89b5-ececeb2ff437-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.074583 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:44:40 crc kubenswrapper[4703]: E1209 12:44:40.075065 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.243508 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" event={"ID":"4157464d-e43c-4d62-89b5-ececeb2ff437","Type":"ContainerDied","Data":"b0be373eb27fceb113559e470c22c3c0b83d26821002d66517c03d7881b05b13"} Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.243565 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0be373eb27fceb113559e470c22c3c0b83d26821002d66517c03d7881b05b13" Dec 09 12:44:40 crc kubenswrapper[4703]: I1209 12:44:40.243602 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7ghts" Dec 09 12:44:43 crc kubenswrapper[4703]: E1209 12:44:43.075485 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.036374 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z"] Dec 09 12:44:48 crc kubenswrapper[4703]: E1209 12:44:48.037503 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="extract-content" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037523 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="extract-content" Dec 09 12:44:48 crc kubenswrapper[4703]: E1209 12:44:48.037539 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="extract-utilities" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037547 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="extract-utilities" Dec 09 12:44:48 crc kubenswrapper[4703]: E1209 12:44:48.037562 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4157464d-e43c-4d62-89b5-ececeb2ff437" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037571 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="4157464d-e43c-4d62-89b5-ececeb2ff437" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:48 crc kubenswrapper[4703]: E1209 12:44:48.037591 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="registry-server" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037598 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="registry-server" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037900 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="904a8e4f-6578-4011-9045-23ea10323dc3" containerName="registry-server" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.037921 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="4157464d-e43c-4d62-89b5-ececeb2ff437" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.038965 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.044558 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.044907 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.045630 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.045788 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.050491 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z"] Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.106512 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.106573 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.107004 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlpp\" (UniqueName: \"kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.210177 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlpp\" (UniqueName: \"kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.210821 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.210879 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.218331 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.224032 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.236948 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlpp\" (UniqueName: \"kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:48 crc kubenswrapper[4703]: I1209 12:44:48.379290 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:44:49 crc kubenswrapper[4703]: I1209 12:44:49.132347 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z"] Dec 09 12:44:49 crc kubenswrapper[4703]: I1209 12:44:49.347648 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" event={"ID":"7340d18a-8eee-4c8f-88d0-13d7bb17a825","Type":"ContainerStarted","Data":"0dbdf39646352d10badaa68186c42c322891e56699b09c665f3579352d793276"} Dec 09 12:44:50 crc kubenswrapper[4703]: I1209 12:44:50.375521 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" event={"ID":"7340d18a-8eee-4c8f-88d0-13d7bb17a825","Type":"ContainerStarted","Data":"1e49ca9bf6f2027ca647a07c9453bb31b84f43f2e874733de3cdfda249b66d1b"} Dec 09 12:44:50 crc kubenswrapper[4703]: I1209 12:44:50.411276 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" podStartSLOduration=1.989739927 podStartE2EDuration="2.411239056s" podCreationTimestamp="2025-12-09 12:44:48 +0000 UTC" firstStartedPulling="2025-12-09 12:44:49.148734232 +0000 UTC m=+2388.397497751" lastFinishedPulling="2025-12-09 12:44:49.570233361 +0000 UTC m=+2388.818996880" observedRunningTime="2025-12-09 12:44:50.398919561 +0000 UTC m=+2389.647683080" watchObservedRunningTime="2025-12-09 12:44:50.411239056 +0000 UTC m=+2389.660002585" Dec 09 12:44:51 crc kubenswrapper[4703]: E1209 12:44:51.079959 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:44:54 crc kubenswrapper[4703]: I1209 12:44:54.069603 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:44:54 crc kubenswrapper[4703]: E1209 12:44:54.070595 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:44:54 crc kubenswrapper[4703]: E1209 12:44:54.073521 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.166977 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9"] Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.170124 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.173346 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.173349 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.180434 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9"] Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.225379 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jg5l\" (UniqueName: \"kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.225492 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.226126 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 
12:45:00.329867 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.330017 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jg5l\" (UniqueName: \"kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.330103 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.331320 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.336765 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.392143 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jg5l\" (UniqueName: \"kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l\") pod \"collect-profiles-29421405-hznf9\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:00 crc kubenswrapper[4703]: I1209 12:45:00.500729 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:01 crc kubenswrapper[4703]: I1209 12:45:01.109540 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9"] Dec 09 12:45:01 crc kubenswrapper[4703]: W1209 12:45:01.113508 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd99fd229_9d94_411b_a64e_3edf0afffe01.slice/crio-a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5 WatchSource:0}: Error finding container a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5: Status 404 returned error can't find the container with id a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5 Dec 09 12:45:01 crc kubenswrapper[4703]: I1209 12:45:01.483809 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" event={"ID":"d99fd229-9d94-411b-a64e-3edf0afffe01","Type":"ContainerStarted","Data":"57b5e80d337fcdd8d89bc10f93c6b80354ca1c3d0e5acafa1a35085ebd40f142"} Dec 09 12:45:01 crc kubenswrapper[4703]: I1209 12:45:01.483879 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" event={"ID":"d99fd229-9d94-411b-a64e-3edf0afffe01","Type":"ContainerStarted","Data":"a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5"} Dec 09 12:45:01 crc kubenswrapper[4703]: I1209 12:45:01.518566 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" podStartSLOduration=1.5185310319999998 podStartE2EDuration="1.518531032s" podCreationTimestamp="2025-12-09 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:45:01.50440054 +0000 UTC m=+2400.753164059" watchObservedRunningTime="2025-12-09 12:45:01.518531032 +0000 UTC m=+2400.767294551" Dec 09 12:45:02 crc kubenswrapper[4703]: I1209 12:45:02.497419 4703 generic.go:334] "Generic (PLEG): container finished" podID="d99fd229-9d94-411b-a64e-3edf0afffe01" containerID="57b5e80d337fcdd8d89bc10f93c6b80354ca1c3d0e5acafa1a35085ebd40f142" exitCode=0 Dec 09 12:45:02 crc kubenswrapper[4703]: I1209 12:45:02.497536 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" event={"ID":"d99fd229-9d94-411b-a64e-3edf0afffe01","Type":"ContainerDied","Data":"57b5e80d337fcdd8d89bc10f93c6b80354ca1c3d0e5acafa1a35085ebd40f142"} Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.019712 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.135570 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jg5l\" (UniqueName: \"kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l\") pod \"d99fd229-9d94-411b-a64e-3edf0afffe01\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.136033 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume\") pod \"d99fd229-9d94-411b-a64e-3edf0afffe01\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.136210 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume\") pod \"d99fd229-9d94-411b-a64e-3edf0afffe01\" (UID: \"d99fd229-9d94-411b-a64e-3edf0afffe01\") " Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.136910 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume" (OuterVolumeSpecName: "config-volume") pod "d99fd229-9d94-411b-a64e-3edf0afffe01" (UID: "d99fd229-9d94-411b-a64e-3edf0afffe01"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.166589 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l" (OuterVolumeSpecName: "kube-api-access-6jg5l") pod "d99fd229-9d94-411b-a64e-3edf0afffe01" (UID: "d99fd229-9d94-411b-a64e-3edf0afffe01"). InnerVolumeSpecName "kube-api-access-6jg5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.167384 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d99fd229-9d94-411b-a64e-3edf0afffe01" (UID: "d99fd229-9d94-411b-a64e-3edf0afffe01"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.241017 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d99fd229-9d94-411b-a64e-3edf0afffe01-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.241062 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d99fd229-9d94-411b-a64e-3edf0afffe01-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.241073 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jg5l\" (UniqueName: \"kubernetes.io/projected/d99fd229-9d94-411b-a64e-3edf0afffe01-kube-api-access-6jg5l\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.522036 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" event={"ID":"d99fd229-9d94-411b-a64e-3edf0afffe01","Type":"ContainerDied","Data":"a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5"} Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.522092 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7832bd08658c4e7213cd60f363935f02652f8224bb273861ef4d62f7a7f6af5" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.522156 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9" Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.604594 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"] Dec 09 12:45:04 crc kubenswrapper[4703]: I1209 12:45:04.617804 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-wvnqz"] Dec 09 12:45:05 crc kubenswrapper[4703]: I1209 12:45:05.087934 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049a6f1a-4b47-421d-bfd6-f1c89a5c0a80" path="/var/lib/kubelet/pods/049a6f1a-4b47-421d-bfd6-f1c89a5c0a80/volumes" Dec 09 12:45:06 crc kubenswrapper[4703]: E1209 12:45:06.072059 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:45:06 crc kubenswrapper[4703]: E1209 12:45:06.072059 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:07 crc kubenswrapper[4703]: I1209 12:45:07.070599 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:45:07 crc kubenswrapper[4703]: E1209 12:45:07.070978 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:45:17 crc kubenswrapper[4703]: E1209 12:45:17.073034 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:18 crc kubenswrapper[4703]: I1209 12:45:18.071710 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:45:18 crc kubenswrapper[4703]: E1209 12:45:18.072031 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:45:20 crc kubenswrapper[4703]: E1209 12:45:20.072270 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:45:22 crc kubenswrapper[4703]: I1209 12:45:22.957430 4703 scope.go:117] "RemoveContainer" containerID="a43783aa0bf25bc8bb1b96389392d0c658ad806ff8f5c10d27ea3c0be3cf38a7" Dec 09 12:45:28 crc kubenswrapper[4703]: E1209 12:45:28.074671 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:33 crc kubenswrapper[4703]: I1209 12:45:33.070512 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:45:33 crc kubenswrapper[4703]: E1209 12:45:33.071357 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:45:35 crc kubenswrapper[4703]: E1209 12:45:35.073113 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:45:39 crc kubenswrapper[4703]: E1209 12:45:39.073087 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:46 crc kubenswrapper[4703]: I1209 12:45:46.070363 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:45:46 crc kubenswrapper[4703]: E1209 12:45:46.071221 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:45:46 crc kubenswrapper[4703]: E1209 12:45:46.073241 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:45:50 crc kubenswrapper[4703]: E1209 12:45:50.071806 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.873009 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:45:50 crc kubenswrapper[4703]: E1209 12:45:50.873816 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99fd229-9d94-411b-a64e-3edf0afffe01" containerName="collect-profiles" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.873843 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99fd229-9d94-411b-a64e-3edf0afffe01" containerName="collect-profiles" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.874180 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99fd229-9d94-411b-a64e-3edf0afffe01" containerName="collect-profiles" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.877869 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.893175 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.959276 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.960143 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:50 crc kubenswrapper[4703]: I1209 12:45:50.960356 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvn9c\" (UniqueName: \"kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.062977 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.063073 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn9c\" (UniqueName: \"kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.063145 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.063612 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.063747 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.089773 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kvn9c\" (UniqueName: \"kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c\") pod \"certified-operators-cqf8r\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.228750 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:45:51 crc kubenswrapper[4703]: I1209 12:45:51.812021 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:45:52 crc kubenswrapper[4703]: I1209 12:45:52.046788 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerStarted","Data":"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa"} Dec 09 12:45:52 crc kubenswrapper[4703]: I1209 12:45:52.047117 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerStarted","Data":"3f7fb17b6cbf08faa7e066fb5001ec469301c351dc329c9fc68dfd77b38921e7"} Dec 09 12:45:53 crc kubenswrapper[4703]: I1209 12:45:53.058334 4703 generic.go:334] "Generic (PLEG): container finished" podID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerID="27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa" exitCode=0 Dec 09 12:45:53 crc kubenswrapper[4703]: I1209 12:45:53.058398 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerDied","Data":"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa"} Dec 09 12:45:54 crc kubenswrapper[4703]: I1209 12:45:54.080552 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerStarted","Data":"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f"} Dec 09 12:45:56 crc kubenswrapper[4703]: I1209 12:45:56.103320 4703 generic.go:334] "Generic (PLEG): container finished" podID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerID="09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f" exitCode=0 Dec 09 12:45:56 crc kubenswrapper[4703]: I1209 12:45:56.103359 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerDied","Data":"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f"} Dec 09 12:45:57 crc kubenswrapper[4703]: I1209 12:45:57.071009 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:45:57 crc kubenswrapper[4703]: E1209 12:45:57.071963 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:45:57 crc kubenswrapper[4703]: I1209 12:45:57.115039 4703 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerStarted","Data":"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff"} Dec 09 12:45:57 crc kubenswrapper[4703]: I1209 12:45:57.144739 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cqf8r" podStartSLOduration=3.488715038 podStartE2EDuration="7.144712661s" podCreationTimestamp="2025-12-09 12:45:50 +0000 UTC" firstStartedPulling="2025-12-09 12:45:53.06249578 +0000 UTC m=+2452.311259299" lastFinishedPulling="2025-12-09 12:45:56.718493403 +0000 UTC m=+2455.967256922" observedRunningTime="2025-12-09 12:45:57.1344278 +0000 UTC m=+2456.383191329" watchObservedRunningTime="2025-12-09 12:45:57.144712661 +0000 UTC m=+2456.393476180" Dec 09 12:46:01 crc kubenswrapper[4703]: E1209 12:46:01.079141 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:46:01 crc kubenswrapper[4703]: I1209 12:46:01.228943 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:01 crc kubenswrapper[4703]: I1209 12:46:01.229054 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:01 crc kubenswrapper[4703]: I1209 12:46:01.317331 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:02 crc kubenswrapper[4703]: I1209 12:46:02.209525 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:02 crc kubenswrapper[4703]: I1209 12:46:02.275623 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:46:04 crc kubenswrapper[4703]: E1209 12:46:04.072227 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.180524 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cqf8r" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="registry-server" containerID="cri-o://43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff" gracePeriod=2 Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.754626 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.808461 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvn9c\" (UniqueName: \"kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c\") pod \"044aa616-03a1-4123-b572-a1c43b4cd46f\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.808635 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content\") pod \"044aa616-03a1-4123-b572-a1c43b4cd46f\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.808706 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities\") pod \"044aa616-03a1-4123-b572-a1c43b4cd46f\" (UID: \"044aa616-03a1-4123-b572-a1c43b4cd46f\") " Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.809467 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities" (OuterVolumeSpecName: "utilities") pod "044aa616-03a1-4123-b572-a1c43b4cd46f" (UID: "044aa616-03a1-4123-b572-a1c43b4cd46f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.815880 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c" (OuterVolumeSpecName: "kube-api-access-kvn9c") pod "044aa616-03a1-4123-b572-a1c43b4cd46f" (UID: "044aa616-03a1-4123-b572-a1c43b4cd46f"). InnerVolumeSpecName "kube-api-access-kvn9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.867819 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "044aa616-03a1-4123-b572-a1c43b4cd46f" (UID: "044aa616-03a1-4123-b572-a1c43b4cd46f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.911429 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.911476 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/044aa616-03a1-4123-b572-a1c43b4cd46f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:46:04 crc kubenswrapper[4703]: I1209 12:46:04.911489 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvn9c\" (UniqueName: \"kubernetes.io/projected/044aa616-03a1-4123-b572-a1c43b4cd46f-kube-api-access-kvn9c\") on node \"crc\" DevicePath \"\"" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.194384 4703 generic.go:334] "Generic (PLEG): container finished" podID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerID="43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff" exitCode=0 Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.194444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerDied","Data":"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff"} Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.194490 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cqf8r" event={"ID":"044aa616-03a1-4123-b572-a1c43b4cd46f","Type":"ContainerDied","Data":"3f7fb17b6cbf08faa7e066fb5001ec469301c351dc329c9fc68dfd77b38921e7"} Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.194514 4703 scope.go:117] "RemoveContainer" containerID="43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.195625 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cqf8r" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.227080 4703 scope.go:117] "RemoveContainer" containerID="09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.227993 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.240171 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cqf8r"] Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.256079 4703 scope.go:117] "RemoveContainer" containerID="27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.300690 4703 scope.go:117] "RemoveContainer" containerID="43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff" Dec 09 12:46:05 crc kubenswrapper[4703]: E1209 12:46:05.301439 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff\": container with ID starting with 43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff not found: ID does not exist" containerID="43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.301520 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff"} err="failed to get container status \"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff\": rpc error: code = NotFound desc = could not find container \"43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff\": container with ID starting with 43a7e1d32619052e794743c47bce2de259ba16ed6e3d813bbf0577f6fcdc86ff not found: ID does not exist" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.301558 4703 scope.go:117] "RemoveContainer" containerID="09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f" Dec 09 12:46:05 crc kubenswrapper[4703]: E1209 12:46:05.302153 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f\": container with ID starting with 09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f not found: ID does not exist" containerID="09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.302227 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f"} err="failed to get container status \"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f\": rpc error: code = NotFound desc = could not find container \"09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f\": container with ID starting with 09773018b9a4e633727e5a3f7315f7c948eb2b4492e7ab8a0761734f36ccf90f not found: ID does not exist" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.302304 4703 scope.go:117] "RemoveContainer" containerID="27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa" Dec 09 12:46:05 crc kubenswrapper[4703]: E1209 12:46:05.302711 4703 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa\": container with ID starting with 27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa not found: ID does not exist" containerID="27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa" Dec 09 12:46:05 crc kubenswrapper[4703]: I1209 12:46:05.302742 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa"} err="failed to get container status \"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa\": rpc error: code = NotFound desc = could not find container \"27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa\": container with ID starting with 27010c9c66875a54296eaa00b76d80f531ccbf9766606029bcf9ce3f0d9f72fa not found: ID does not exist" Dec 09 12:46:07 crc kubenswrapper[4703]: I1209 12:46:07.083318 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" path="/var/lib/kubelet/pods/044aa616-03a1-4123-b572-a1c43b4cd46f/volumes" Dec 09 12:46:10 crc kubenswrapper[4703]: I1209 12:46:10.069857 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:46:10 crc kubenswrapper[4703]: E1209 12:46:10.070527 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:46:16 crc kubenswrapper[4703]: E1209 12:46:16.073093 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:46:18 crc kubenswrapper[4703]: E1209 12:46:18.072097 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:46:21 crc kubenswrapper[4703]: I1209 12:46:21.077589 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:46:21 crc kubenswrapper[4703]: E1209 12:46:21.078165 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:46:29 crc kubenswrapper[4703]: E1209 12:46:29.073227 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:46:31 crc kubenswrapper[4703]: E1209 12:46:31.089712 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:46:32 crc kubenswrapper[4703]: I1209 12:46:32.071578 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:46:32 crc kubenswrapper[4703]: E1209 12:46:32.071903 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:46:41 crc kubenswrapper[4703]: E1209 12:46:41.079008 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:46:45 crc kubenswrapper[4703]: E1209 12:46:45.072511 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:46:46 crc kubenswrapper[4703]: I1209 12:46:46.070494 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:46:46 crc kubenswrapper[4703]: E1209 12:46:46.071418 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:46:56 crc kubenswrapper[4703]: E1209 12:46:56.073314 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:46:57 crc kubenswrapper[4703]: E1209 12:46:57.074583 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:46:59 crc kubenswrapper[4703]: I1209 12:46:59.069932 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:46:59 crc kubenswrapper[4703]: E1209 12:46:59.070511 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:47:09 crc kubenswrapper[4703]: E1209 12:47:09.072327 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:47:10 crc kubenswrapper[4703]: E1209 12:47:10.073463 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:47:14 crc kubenswrapper[4703]: I1209 12:47:14.070586 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:47:14 crc kubenswrapper[4703]: I1209 12:47:14.916132 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22"} Dec 09 12:47:21 crc kubenswrapper[4703]: E1209 12:47:21.097513 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:47:23 crc kubenswrapper[4703]: E1209 12:47:23.071333 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:47:34 crc kubenswrapper[4703]: E1209 12:47:34.078145 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:47:37 crc kubenswrapper[4703]: E1209 12:47:37.071807 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.777467 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:44 crc kubenswrapper[4703]: E1209 12:47:44.780794 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="registry-server" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.780922 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="registry-server" Dec 09 12:47:44 crc kubenswrapper[4703]: E1209 12:47:44.781021 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="extract-utilities" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.781110 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="extract-utilities" Dec 09 12:47:44 crc kubenswrapper[4703]: E1209 12:47:44.781217 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="extract-content" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.781548 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="extract-content" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.781969 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="044aa616-03a1-4123-b572-a1c43b4cd46f" containerName="registry-server" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.787447 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.805824 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.819594 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.819704 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhs95\" (UniqueName: \"kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.819766 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.921672 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhs95\" (UniqueName: \"kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.921773 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.922015 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.922606 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.922890 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:44 crc kubenswrapper[4703]: I1209 12:47:44.953829 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zhs95\" (UniqueName: \"kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95\") pod \"community-operators-q7l82\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:45 crc kubenswrapper[4703]: I1209 12:47:45.134290 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:45 crc kubenswrapper[4703]: I1209 12:47:45.782529 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:46 crc kubenswrapper[4703]: E1209 12:47:46.072338 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:47:46 crc kubenswrapper[4703]: I1209 12:47:46.289848 4703 generic.go:334] "Generic (PLEG): container finished" podID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerID="29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef" exitCode=0 Dec 09 12:47:46 crc kubenswrapper[4703]: I1209 12:47:46.289930 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerDied","Data":"29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef"} Dec 09 12:47:46 crc kubenswrapper[4703]: I1209 12:47:46.289970 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerStarted","Data":"600473a1e11ad09d315bb3b18030d2c40f578af97f19aff1a475648e7cdc190e"} Dec 09 12:47:47 crc kubenswrapper[4703]: I1209 12:47:47.318459 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerStarted","Data":"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9"} Dec 09 12:47:48 crc kubenswrapper[4703]: I1209 12:47:48.332896 4703 generic.go:334] "Generic (PLEG): container finished" podID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerID="fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9" exitCode=0 Dec 09 12:47:48 crc kubenswrapper[4703]: I1209 12:47:48.332983 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerDied","Data":"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9"} Dec 09 12:47:49 crc kubenswrapper[4703]: I1209 12:47:49.353473 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerStarted","Data":"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe"} Dec 09 12:47:49 crc kubenswrapper[4703]: I1209 12:47:49.376790 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7l82" podStartSLOduration=2.947816768 podStartE2EDuration="5.376766667s" podCreationTimestamp="2025-12-09 12:47:44 +0000 UTC" firstStartedPulling="2025-12-09 
12:47:46.291919122 +0000 UTC m=+2565.540682641" lastFinishedPulling="2025-12-09 12:47:48.720869021 +0000 UTC m=+2567.969632540" observedRunningTime="2025-12-09 12:47:49.374702443 +0000 UTC m=+2568.623465972" watchObservedRunningTime="2025-12-09 12:47:49.376766667 +0000 UTC m=+2568.625530186" Dec 09 12:47:50 crc kubenswrapper[4703]: E1209 12:47:50.071531 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:47:55 crc kubenswrapper[4703]: I1209 12:47:55.134897 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:55 crc kubenswrapper[4703]: I1209 12:47:55.135352 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:55 crc kubenswrapper[4703]: I1209 12:47:55.207420 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:55 crc kubenswrapper[4703]: I1209 12:47:55.473893 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:55 crc kubenswrapper[4703]: I1209 12:47:55.536270 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:57 crc kubenswrapper[4703]: I1209 12:47:57.434028 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7l82" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="registry-server" containerID="cri-o://a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe" gracePeriod=2 Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.053023 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.169506 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhs95\" (UniqueName: \"kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95\") pod \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.169758 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities\") pod \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.169896 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content\") pod \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\" (UID: \"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34\") " Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.171900 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities" (OuterVolumeSpecName: "utilities") pod "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" (UID: "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.179542 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95" (OuterVolumeSpecName: "kube-api-access-zhs95") pod "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" (UID: "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34"). InnerVolumeSpecName "kube-api-access-zhs95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.227502 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" (UID: "d1fc5a9a-a1e4-4e68-8297-b7d9480cad34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.274109 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhs95\" (UniqueName: \"kubernetes.io/projected/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-kube-api-access-zhs95\") on node \"crc\" DevicePath \"\"" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.274167 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.274180 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.447790 4703 generic.go:334] "Generic (PLEG): container finished" podID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerID="a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe" exitCode=0 Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.447849 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerDied","Data":"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe"} Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.447903 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7l82" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.449320 4703 scope.go:117] "RemoveContainer" containerID="a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.456246 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7l82" event={"ID":"d1fc5a9a-a1e4-4e68-8297-b7d9480cad34","Type":"ContainerDied","Data":"600473a1e11ad09d315bb3b18030d2c40f578af97f19aff1a475648e7cdc190e"} Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.491497 4703 scope.go:117] "RemoveContainer" containerID="fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.507437 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.531629 4703 scope.go:117] "RemoveContainer" containerID="29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.539018 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7l82"] Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.574867 4703 scope.go:117] "RemoveContainer" containerID="a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe" Dec 09 12:47:58 crc kubenswrapper[4703]: E1209 12:47:58.575471 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe\": container with ID starting with a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe not found: ID does not exist" containerID="a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.575511 
4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe"} err="failed to get container status \"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe\": rpc error: code = NotFound desc = could not find container \"a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe\": container with ID starting with a7b350e83ac1cf2ab1b80f122c03bd591cc0e663e853690a18fa38541fbc2cfe not found: ID does not exist" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.575540 4703 scope.go:117] "RemoveContainer" containerID="fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9" Dec 09 12:47:58 crc kubenswrapper[4703]: E1209 12:47:58.575995 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9\": container with ID starting with fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9 not found: ID does not exist" containerID="fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.576025 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9"} err="failed to get container status \"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9\": rpc error: code = NotFound desc = could not find container \"fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9\": container with ID starting with fb70abd09b555b09539aa3651c51e12065961918e77010a2399666e1fca113c9 not found: ID does not exist" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.576044 4703 scope.go:117] "RemoveContainer" containerID="29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef" Dec 09 12:47:58 crc kubenswrapper[4703]: E1209 12:47:58.576530 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef\": container with ID starting with 29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef not found: ID does not exist" containerID="29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef" Dec 09 12:47:58 crc kubenswrapper[4703]: I1209 12:47:58.576563 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef"} err="failed to get container status \"29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef\": rpc error: code = NotFound desc = could not find container \"29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef\": container with ID starting with 29935dca5e25fb8574411b54ae39af4d13857085729416dbae509208dc2493ef not found: ID does not exist" Dec 09 12:47:59 crc kubenswrapper[4703]: I1209 12:47:59.086592 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" path="/var/lib/kubelet/pods/d1fc5a9a-a1e4-4e68-8297-b7d9480cad34/volumes" Dec 09 12:48:01 crc kubenswrapper[4703]: E1209 12:48:01.080401 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:48:04 crc kubenswrapper[4703]: E1209 12:48:04.073030 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:48:15 crc kubenswrapper[4703]: E1209 12:48:15.085174 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:48:17 crc kubenswrapper[4703]: E1209 12:48:17.073630 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:48:27 crc kubenswrapper[4703]: E1209 12:48:27.073340 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:48:32 crc kubenswrapper[4703]: E1209 12:48:32.072540 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:48:41 crc kubenswrapper[4703]: I1209 12:48:41.080863 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:48:41 crc kubenswrapper[4703]: E1209 12:48:41.164030 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:48:41 crc kubenswrapper[4703]: E1209 12:48:41.164112 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:48:41 crc kubenswrapper[4703]: E1209 12:48:41.164386 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:48:41 crc kubenswrapper[4703]: E1209 12:48:41.166396 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:48:44 crc kubenswrapper[4703]: E1209 12:48:44.075349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:48:52 crc kubenswrapper[4703]: E1209 12:48:52.074164 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:48:55 crc kubenswrapper[4703]: E1209 12:48:55.208126 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:48:55 crc kubenswrapper[4703]: E1209 12:48:55.208255 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:48:55 crc kubenswrapper[4703]: E1209 12:48:55.208476 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:48:55 crc kubenswrapper[4703]: E1209 12:48:55.209686 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.701322 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:02 crc kubenswrapper[4703]: E1209 12:49:02.702830 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="registry-server" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.702854 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="registry-server" Dec 09 12:49:02 crc kubenswrapper[4703]: E1209 12:49:02.702912 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="extract-utilities" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.702921 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="extract-utilities" Dec 09 12:49:02 crc kubenswrapper[4703]: E1209 12:49:02.702955 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="extract-content" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.702964 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="extract-content" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.703304 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fc5a9a-a1e4-4e68-8297-b7d9480cad34" containerName="registry-server" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.705337 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.716226 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.862571 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.862864 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.863381 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lv22\" (UniqueName: \"kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.965628 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lv22\" (UniqueName: \"kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.966218 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.966310 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.967174 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.967183 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:02 crc kubenswrapper[4703]: I1209 12:49:02.985753 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5lv22\" (UniqueName: \"kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22\") pod \"redhat-operators-rhhtc\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:03 crc kubenswrapper[4703]: I1209 12:49:03.034294 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:03 crc kubenswrapper[4703]: I1209 12:49:03.643122 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:03 crc kubenswrapper[4703]: W1209 12:49:03.650678 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaac6f742_869d_4ac9_a221_c3552c7da7f8.slice/crio-5e76d028515ed2f17f3acdc04e716d87044378e14fee5bf38165bd0b8ee4f90d WatchSource:0}: Error finding container 5e76d028515ed2f17f3acdc04e716d87044378e14fee5bf38165bd0b8ee4f90d: Status 404 returned error can't find the container with id 5e76d028515ed2f17f3acdc04e716d87044378e14fee5bf38165bd0b8ee4f90d Dec 09 12:49:04 crc kubenswrapper[4703]: E1209 12:49:04.071968 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:49:04 crc kubenswrapper[4703]: I1209 12:49:04.290927 4703 generic.go:334] "Generic (PLEG): container finished" podID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerID="c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326" exitCode=0 Dec 09 12:49:04 crc kubenswrapper[4703]: I1209 12:49:04.290979 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerDied","Data":"c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326"} Dec 09 12:49:04 crc kubenswrapper[4703]: I1209 12:49:04.291012 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerStarted","Data":"5e76d028515ed2f17f3acdc04e716d87044378e14fee5bf38165bd0b8ee4f90d"} Dec 09 12:49:05 crc kubenswrapper[4703]: I1209 12:49:05.305716 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerStarted","Data":"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844"} Dec 09 12:49:08 crc kubenswrapper[4703]: E1209 12:49:08.072493 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:49:10 crc kubenswrapper[4703]: I1209 12:49:10.415390 4703 generic.go:334] "Generic (PLEG): container finished" podID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerID="a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844" exitCode=0 Dec 09 12:49:10 crc kubenswrapper[4703]: I1209 12:49:10.415481 4703 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerDied","Data":"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844"} Dec 09 12:49:11 crc kubenswrapper[4703]: I1209 12:49:11.430618 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerStarted","Data":"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30"} Dec 09 12:49:11 crc kubenswrapper[4703]: I1209 12:49:11.460871 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rhhtc" podStartSLOduration=2.867479382 podStartE2EDuration="9.460846363s" podCreationTimestamp="2025-12-09 12:49:02 +0000 UTC" firstStartedPulling="2025-12-09 12:49:04.292916153 +0000 UTC m=+2643.541679662" lastFinishedPulling="2025-12-09 12:49:10.886283124 +0000 UTC m=+2650.135046643" observedRunningTime="2025-12-09 12:49:11.452953305 +0000 UTC m=+2650.701716824" watchObservedRunningTime="2025-12-09 12:49:11.460846363 +0000 UTC m=+2650.709609882" Dec 09 12:49:13 crc kubenswrapper[4703]: I1209 12:49:13.035140 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:13 crc kubenswrapper[4703]: I1209 12:49:13.035663 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:14 crc kubenswrapper[4703]: I1209 12:49:14.092721 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rhhtc" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="registry-server" probeResult="failure" output=< Dec 09 12:49:14 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:49:14 crc kubenswrapper[4703]: > Dec 09 12:49:18 crc kubenswrapper[4703]: E1209 12:49:18.073388 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:49:22 crc kubenswrapper[4703]: E1209 12:49:22.074817 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:49:23 crc kubenswrapper[4703]: I1209 12:49:23.105586 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:23 crc kubenswrapper[4703]: I1209 12:49:23.169626 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:23 crc kubenswrapper[4703]: I1209 12:49:23.354614 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:24 crc kubenswrapper[4703]: I1209 12:49:24.573327 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rhhtc" 
podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="registry-server" containerID="cri-o://fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30" gracePeriod=2 Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.384511 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.524916 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content\") pod \"aac6f742-869d-4ac9-a221-c3552c7da7f8\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.525009 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lv22\" (UniqueName: \"kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22\") pod \"aac6f742-869d-4ac9-a221-c3552c7da7f8\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.525094 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities\") pod \"aac6f742-869d-4ac9-a221-c3552c7da7f8\" (UID: \"aac6f742-869d-4ac9-a221-c3552c7da7f8\") " Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.526465 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities" (OuterVolumeSpecName: "utilities") pod "aac6f742-869d-4ac9-a221-c3552c7da7f8" (UID: "aac6f742-869d-4ac9-a221-c3552c7da7f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.536319 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22" (OuterVolumeSpecName: "kube-api-access-5lv22") pod "aac6f742-869d-4ac9-a221-c3552c7da7f8" (UID: "aac6f742-869d-4ac9-a221-c3552c7da7f8"). InnerVolumeSpecName "kube-api-access-5lv22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.598656 4703 generic.go:334] "Generic (PLEG): container finished" podID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerID="fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30" exitCode=0 Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.599245 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rhhtc" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.600475 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerDied","Data":"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30"} Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.600517 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhhtc" event={"ID":"aac6f742-869d-4ac9-a221-c3552c7da7f8","Type":"ContainerDied","Data":"5e76d028515ed2f17f3acdc04e716d87044378e14fee5bf38165bd0b8ee4f90d"} Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.600546 4703 scope.go:117] "RemoveContainer" containerID="fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.629170 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lv22\" (UniqueName: \"kubernetes.io/projected/aac6f742-869d-4ac9-a221-c3552c7da7f8-kube-api-access-5lv22\") on node \"crc\" DevicePath \"\"" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.629248 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.641345 4703 scope.go:117] "RemoveContainer" containerID="a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.672571 4703 scope.go:117] "RemoveContainer" containerID="c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.692585 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aac6f742-869d-4ac9-a221-c3552c7da7f8" (UID: "aac6f742-869d-4ac9-a221-c3552c7da7f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.720647 4703 scope.go:117] "RemoveContainer" containerID="fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30" Dec 09 12:49:25 crc kubenswrapper[4703]: E1209 12:49:25.721359 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30\": container with ID starting with fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30 not found: ID does not exist" containerID="fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.721404 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30"} err="failed to get container status \"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30\": rpc error: code = NotFound desc = could not find container \"fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30\": container with ID starting with fd70401eaca82b000b2f67202aa6bba88ce73d7cf92b105f1ed88a21884dcf30 not found: ID does not exist" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.721433 4703 scope.go:117] "RemoveContainer" containerID="a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844" Dec 09 12:49:25 crc kubenswrapper[4703]: E1209 12:49:25.721901 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844\": container with ID starting with a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844 not found: ID does not exist" containerID="a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.721934 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844"} err="failed to get container status \"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844\": rpc error: code = NotFound desc = could not find container \"a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844\": container with ID starting with a4d54b8de90874c06cfc3d21c20adb044457c23529710d99abca6e510ecfc844 not found: ID does not exist" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.721953 4703 scope.go:117] "RemoveContainer" containerID="c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326" Dec 09 12:49:25 crc kubenswrapper[4703]: E1209 12:49:25.722535 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326\": container with ID starting with c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326 not found: ID does not exist" containerID="c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.722587 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326"} err="failed to get container status \"c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326\": rpc error: code = NotFound desc = could not 
find container \"c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326\": container with ID starting with c4943968f9cf5f415caab9df9ad539c6b8e6e53a2f8335b1a93c4350c3d4b326 not found: ID does not exist" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.736249 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aac6f742-869d-4ac9-a221-c3552c7da7f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.941659 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:25 crc kubenswrapper[4703]: I1209 12:49:25.954111 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rhhtc"] Dec 09 12:49:27 crc kubenswrapper[4703]: I1209 12:49:27.084042 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" path="/var/lib/kubelet/pods/aac6f742-869d-4ac9-a221-c3552c7da7f8/volumes" Dec 09 12:49:30 crc kubenswrapper[4703]: I1209 12:49:30.083114 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:49:30 crc kubenswrapper[4703]: I1209 12:49:30.084244 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:49:31 crc kubenswrapper[4703]: E1209 12:49:31.082216 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:49:36 crc kubenswrapper[4703]: E1209 12:49:36.072082 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:49:43 crc kubenswrapper[4703]: E1209 12:49:43.074316 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:49:50 crc kubenswrapper[4703]: E1209 12:49:50.073084 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:49:54 crc kubenswrapper[4703]: E1209 12:49:54.072711 4703 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:50:00 crc kubenswrapper[4703]: I1209 12:50:00.084262 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:50:00 crc kubenswrapper[4703]: I1209 12:50:00.085059 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:50:05 crc kubenswrapper[4703]: E1209 12:50:05.072790 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:50:06 crc kubenswrapper[4703]: E1209 12:50:06.083298 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:50:19 crc kubenswrapper[4703]: E1209 12:50:19.072805 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:50:19 crc kubenswrapper[4703]: E1209 12:50:19.076348 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:50:30 crc kubenswrapper[4703]: E1209 12:50:30.074894 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.083322 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.083371 4703 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.083413 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.084023 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.084076 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22" gracePeriod=600 Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.315176 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22" exitCode=0 Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.315229 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22"} Dec 09 12:50:30 crc kubenswrapper[4703]: I1209 12:50:30.315301 4703 scope.go:117] "RemoveContainer" containerID="81e995d92400a3150feea876a90492238c9377534b0477b21807ac1cd60af1d5" Dec 09 12:50:31 crc kubenswrapper[4703]: I1209 12:50:31.327923 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"} Dec 09 12:50:33 crc kubenswrapper[4703]: E1209 12:50:33.073660 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:50:43 crc kubenswrapper[4703]: E1209 12:50:43.073379 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:50:47 crc kubenswrapper[4703]: E1209 12:50:47.072799 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:50:58 crc kubenswrapper[4703]: E1209 12:50:58.073901 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:51:01 crc kubenswrapper[4703]: E1209 12:51:01.082635 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:51:07 crc kubenswrapper[4703]: I1209 12:51:07.735899 4703 generic.go:334] "Generic (PLEG): container finished" podID="7340d18a-8eee-4c8f-88d0-13d7bb17a825" containerID="1e49ca9bf6f2027ca647a07c9453bb31b84f43f2e874733de3cdfda249b66d1b" exitCode=2 Dec 09 12:51:07 crc kubenswrapper[4703]: I1209 12:51:07.736026 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" event={"ID":"7340d18a-8eee-4c8f-88d0-13d7bb17a825","Type":"ContainerDied","Data":"1e49ca9bf6f2027ca647a07c9453bb31b84f43f2e874733de3cdfda249b66d1b"} Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.375635 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.540786 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlpp\" (UniqueName: \"kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp\") pod \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.540910 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key\") pod \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.541266 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory\") pod \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\" (UID: \"7340d18a-8eee-4c8f-88d0-13d7bb17a825\") " Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.548777 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp" (OuterVolumeSpecName: "kube-api-access-tvlpp") pod "7340d18a-8eee-4c8f-88d0-13d7bb17a825" (UID: "7340d18a-8eee-4c8f-88d0-13d7bb17a825"). InnerVolumeSpecName "kube-api-access-tvlpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.576592 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory" (OuterVolumeSpecName: "inventory") pod "7340d18a-8eee-4c8f-88d0-13d7bb17a825" (UID: "7340d18a-8eee-4c8f-88d0-13d7bb17a825"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.577016 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7340d18a-8eee-4c8f-88d0-13d7bb17a825" (UID: "7340d18a-8eee-4c8f-88d0-13d7bb17a825"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.645566 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.645621 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlpp\" (UniqueName: \"kubernetes.io/projected/7340d18a-8eee-4c8f-88d0-13d7bb17a825-kube-api-access-tvlpp\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.645632 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7340d18a-8eee-4c8f-88d0-13d7bb17a825-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.767045 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" event={"ID":"7340d18a-8eee-4c8f-88d0-13d7bb17a825","Type":"ContainerDied","Data":"0dbdf39646352d10badaa68186c42c322891e56699b09c665f3579352d793276"} Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.767108 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dbdf39646352d10badaa68186c42c322891e56699b09c665f3579352d793276" Dec 09 12:51:09 crc kubenswrapper[4703]: I1209 12:51:09.767139 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z" Dec 09 12:51:12 crc kubenswrapper[4703]: E1209 12:51:12.073601 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:51:12 crc kubenswrapper[4703]: E1209 12:51:12.073683 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:51:25 crc kubenswrapper[4703]: E1209 12:51:25.073625 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.044550 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"] Dec 09 12:51:27 crc kubenswrapper[4703]: E1209 12:51:27.045711 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7340d18a-8eee-4c8f-88d0-13d7bb17a825" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.045734 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="7340d18a-8eee-4c8f-88d0-13d7bb17a825" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:51:27 crc kubenswrapper[4703]: E1209 12:51:27.045771 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="extract-utilities" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.045780 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="extract-utilities" Dec 09 12:51:27 crc kubenswrapper[4703]: E1209 12:51:27.045823 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="registry-server" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.045832 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="registry-server" Dec 09 12:51:27 crc kubenswrapper[4703]: E1209 12:51:27.045847 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="extract-content" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.045855 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="extract-content" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.046170 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac6f742-869d-4ac9-a221-c3552c7da7f8" containerName="registry-server" Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.046206 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="7340d18a-8eee-4c8f-88d0-13d7bb17a825" 
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.047491 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.057586 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.057788 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.057819 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"]
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.057900 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.058006 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 12:51:27 crc kubenswrapper[4703]: E1209 12:51:27.082473 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.109495 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsfkw\" (UniqueName: \"kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.109679 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.110207 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.213407 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsfkw\" (UniqueName: \"kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.213726 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.213944 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.222130 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.222398 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.236427 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsfkw\" (UniqueName: \"kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:27 crc kubenswrapper[4703]: I1209 12:51:27.389158 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"
Dec 09 12:51:28 crc kubenswrapper[4703]: I1209 12:51:28.006715 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4"]
Dec 09 12:51:29 crc kubenswrapper[4703]: I1209 12:51:29.063468 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" event={"ID":"1e42421c-88e1-4119-a154-33226550fe4d","Type":"ContainerStarted","Data":"1ca67fd751d2fabda829351b21f51879b1494f736ac437d7c89ba740a5665ca1"}
Dec 09 12:51:29 crc kubenswrapper[4703]: I1209 12:51:29.065349 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" event={"ID":"1e42421c-88e1-4119-a154-33226550fe4d","Type":"ContainerStarted","Data":"fc12920fabe5c35c16dda1361c1d00512c5f2822687746040da4c613477c8371"}
Dec 09 12:51:29 crc kubenswrapper[4703]: I1209 12:51:29.129015 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" podStartSLOduration=1.7085264100000002 podStartE2EDuration="2.128985125s" podCreationTimestamp="2025-12-09 12:51:27 +0000 UTC" firstStartedPulling="2025-12-09 12:51:28.01602687 +0000 UTC m=+2787.264790399" lastFinishedPulling="2025-12-09 12:51:28.436485595 +0000 UTC m=+2787.685249114" observedRunningTime="2025-12-09 12:51:29.120675066 +0000 UTC m=+2788.369438585" watchObservedRunningTime="2025-12-09 12:51:29.128985125 +0000 UTC m=+2788.377748644"
Dec 09 12:51:40 crc kubenswrapper[4703]: E1209 12:51:40.072368 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:51:42 crc kubenswrapper[4703]: E1209 12:51:42.072676 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:51:51 crc kubenswrapper[4703]: E1209 12:51:51.081407 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:51:57 crc kubenswrapper[4703]: E1209 12:51:57.073745 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:52:05 crc kubenswrapper[4703]: E1209 12:52:05.073167 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:52:08 crc kubenswrapper[4703]: E1209 12:52:08.074657 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:52:17 crc kubenswrapper[4703]: E1209 12:52:17.074277 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:52:20 crc kubenswrapper[4703]: E1209 12:52:20.072288 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:52:28 crc kubenswrapper[4703]: E1209 12:52:28.073816 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:52:30 crc kubenswrapper[4703]: I1209 12:52:30.084413 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:52:30 crc kubenswrapper[4703]: I1209 12:52:30.084899 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:52:33 crc kubenswrapper[4703]: E1209 12:52:33.074524 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:52:39 crc kubenswrapper[4703]: E1209 12:52:39.074360 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:52:44 crc kubenswrapper[4703]: E1209 12:52:44.072957 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:52:51 crc kubenswrapper[4703]: E1209 12:52:51.082418 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:52:56 crc kubenswrapper[4703]: E1209 12:52:56.072134 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:53:00 crc kubenswrapper[4703]: I1209 12:53:00.083702 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:53:00 crc kubenswrapper[4703]: I1209 12:53:00.084599 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:53:04 crc kubenswrapper[4703]: E1209 12:53:04.072498 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:53:07 crc kubenswrapper[4703]: E1209 12:53:07.073717 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.228363 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.232406 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.248252 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.399646 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.399728 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4dg\" (UniqueName: \"kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.399753 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.501899 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.501971 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4dg\" (UniqueName: \"kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.501998 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.502753 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.502778 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.537016 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4dg\" (UniqueName: \"kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg\") pod \"redhat-marketplace-f9hzj\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") " pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:09 crc kubenswrapper[4703]: I1209 12:53:09.568291 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:10 crc kubenswrapper[4703]: I1209 12:53:10.112625 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:10 crc kubenswrapper[4703]: I1209 12:53:10.233439 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerStarted","Data":"c66be2d164decf2b4de72681e48ea364044b55935cbf99ae185c24f298d975b3"}
Dec 09 12:53:11 crc kubenswrapper[4703]: I1209 12:53:11.249477 4703 generic.go:334] "Generic (PLEG): container finished" podID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerID="0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8" exitCode=0
Dec 09 12:53:11 crc kubenswrapper[4703]: I1209 12:53:11.249957 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerDied","Data":"0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8"}
Dec 09 12:53:12 crc kubenswrapper[4703]: I1209 12:53:12.267966 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerStarted","Data":"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"}
Dec 09 12:53:13 crc kubenswrapper[4703]: I1209 12:53:13.281438 4703 generic.go:334] "Generic (PLEG): container finished" podID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerID="db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3" exitCode=0
Dec 09 12:53:13 crc kubenswrapper[4703]: I1209 12:53:13.281540 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerDied","Data":"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"}
Dec 09 12:53:14 crc kubenswrapper[4703]: I1209 12:53:14.298410 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerStarted","Data":"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"}
Dec 09 12:53:14 crc kubenswrapper[4703]: I1209 12:53:14.326604 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9hzj" podStartSLOduration=2.885849459 podStartE2EDuration="5.326580179s" podCreationTimestamp="2025-12-09 12:53:09 +0000 UTC" firstStartedPulling="2025-12-09 12:53:11.253037321 +0000 UTC m=+2890.501800840" lastFinishedPulling="2025-12-09 12:53:13.693768041 +0000 UTC m=+2892.942531560" observedRunningTime="2025-12-09 12:53:14.319917144 +0000 UTC m=+2893.568680663" watchObservedRunningTime="2025-12-09 12:53:14.326580179 +0000 UTC m=+2893.575343698"
Dec 09 12:53:17 crc kubenswrapper[4703]: E1209 12:53:17.074600 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:53:19 crc kubenswrapper[4703]: I1209 12:53:19.568593 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:19 crc kubenswrapper[4703]: I1209 12:53:19.569186 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:19 crc kubenswrapper[4703]: I1209 12:53:19.644803 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:20 crc kubenswrapper[4703]: I1209 12:53:20.418091 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:20 crc kubenswrapper[4703]: I1209 12:53:20.487243 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:21 crc kubenswrapper[4703]: E1209 12:53:21.079490 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:53:22 crc kubenswrapper[4703]: I1209 12:53:22.387998 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f9hzj" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="registry-server" containerID="cri-o://158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9" gracePeriod=2
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.044734 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.228468 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l4dg\" (UniqueName: \"kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg\") pod \"ed943309-01ad-43ec-88b2-cfb8978af1ea\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") "
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.229365 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities\") pod \"ed943309-01ad-43ec-88b2-cfb8978af1ea\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") "
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.229553 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content\") pod \"ed943309-01ad-43ec-88b2-cfb8978af1ea\" (UID: \"ed943309-01ad-43ec-88b2-cfb8978af1ea\") "
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.230614 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities" (OuterVolumeSpecName: "utilities") pod "ed943309-01ad-43ec-88b2-cfb8978af1ea" (UID: "ed943309-01ad-43ec-88b2-cfb8978af1ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.234937 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg" (OuterVolumeSpecName: "kube-api-access-5l4dg") pod "ed943309-01ad-43ec-88b2-cfb8978af1ea" (UID: "ed943309-01ad-43ec-88b2-cfb8978af1ea"). InnerVolumeSpecName "kube-api-access-5l4dg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.257166 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed943309-01ad-43ec-88b2-cfb8978af1ea" (UID: "ed943309-01ad-43ec-88b2-cfb8978af1ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.333949 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l4dg\" (UniqueName: \"kubernetes.io/projected/ed943309-01ad-43ec-88b2-cfb8978af1ea-kube-api-access-5l4dg\") on node \"crc\" DevicePath \"\""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.334000 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.334013 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed943309-01ad-43ec-88b2-cfb8978af1ea-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.401342 4703 generic.go:334] "Generic (PLEG): container finished" podID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerID="158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9" exitCode=0
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.401415 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerDied","Data":"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"}
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.401473 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9hzj" event={"ID":"ed943309-01ad-43ec-88b2-cfb8978af1ea","Type":"ContainerDied","Data":"c66be2d164decf2b4de72681e48ea364044b55935cbf99ae185c24f298d975b3"}
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.401468 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9hzj"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.401494 4703 scope.go:117] "RemoveContainer" containerID="158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.429026 4703 scope.go:117] "RemoveContainer" containerID="db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.452315 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.465544 4703 scope.go:117] "RemoveContainer" containerID="0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.467494 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9hzj"]
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.528606 4703 scope.go:117] "RemoveContainer" containerID="158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"
Dec 09 12:53:23 crc kubenswrapper[4703]: E1209 12:53:23.531734 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9\": container with ID starting with 158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9 not found: ID does not exist" containerID="158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.531806 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9"} err="failed to get container status \"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9\": rpc error: code = NotFound desc = could not find container \"158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9\": container with ID starting with 158cd9079b8182683f119e8aeb63e74a1109fb4562add57373f8ee61df68d2b9 not found: ID does not exist"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.531853 4703 scope.go:117] "RemoveContainer" containerID="db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"
Dec 09 12:53:23 crc kubenswrapper[4703]: E1209 12:53:23.532342 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3\": container with ID starting with db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3 not found: ID does not exist" containerID="db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.532391 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3"} err="failed to get container status \"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3\": rpc error: code = NotFound desc = could not find container \"db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3\": container with ID starting with db7ca568da4d074502d6ef0c623ebd3f241b46a4719a2b9780d31ac824d1a3f3 not found: ID does not exist"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.532419 4703 scope.go:117] "RemoveContainer" containerID="0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8"
Dec 09 12:53:23 crc kubenswrapper[4703]: E1209 12:53:23.532827 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8\": container with ID starting with 0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8 not found: ID does not exist" containerID="0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8"
Dec 09 12:53:23 crc kubenswrapper[4703]: I1209 12:53:23.532870 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8"} err="failed to get container status \"0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8\": rpc error: code = NotFound desc = could not find container \"0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8\": container with ID starting with 0da880f322ab67362d09ad280963bad727b752c7d98818f6e5e6570e2d6058e8 not found: ID does not exist"
Dec 09 12:53:25 crc kubenswrapper[4703]: I1209 12:53:25.089320 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" path="/var/lib/kubelet/pods/ed943309-01ad-43ec-88b2-cfb8978af1ea/volumes"
Dec 09 12:53:28 crc kubenswrapper[4703]: E1209 12:53:28.072612 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.083387 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.083830 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.083887 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.085289 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.085370 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" gracePeriod=600
Dec 09 12:53:30 crc kubenswrapper[4703]: E1209 12:53:30.217800 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.477263 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" exitCode=0
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.477360 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"}
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.477843 4703 scope.go:117] "RemoveContainer" containerID="29fa04997403c87f646b1159d85eab1c90c2af5a15739fcea56f5fd4e41a5e22"
Dec 09 12:53:30 crc kubenswrapper[4703]: I1209 12:53:30.479011 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:53:30 crc kubenswrapper[4703]: E1209 12:53:30.479405 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:53:33 crc kubenswrapper[4703]: E1209 12:53:33.074282 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:53:42 crc kubenswrapper[4703]: I1209 12:53:42.073081 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 12:53:42 crc kubenswrapper[4703]: E1209 12:53:42.197087 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 12:53:42 crc kubenswrapper[4703]: E1209 12:53:42.197171 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 12:53:42 crc kubenswrapper[4703]: E1209 12:53:42.197349 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 12:53:42 crc kubenswrapper[4703]: E1209 12:53:42.198494 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:53:43 crc kubenswrapper[4703]: I1209 12:53:43.071332 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:53:43 crc kubenswrapper[4703]: E1209 12:53:43.071701 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:53:45 crc kubenswrapper[4703]: E1209 12:53:45.073508 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:53:55 crc kubenswrapper[4703]: I1209 12:53:55.070005 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:53:55 crc kubenswrapper[4703]: E1209 12:53:55.071382 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:53:58 crc kubenswrapper[4703]: E1209 12:53:58.075773 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:53:58 crc kubenswrapper[4703]: E1209 12:53:58.204965 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 12:53:58 crc kubenswrapper[4703]: E1209 12:53:58.205036 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 12:53:58 crc kubenswrapper[4703]: E1209 12:53:58.205230 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 12:53:58 crc kubenswrapper[4703]: E1209 12:53:58.206661 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:54:06 crc kubenswrapper[4703]: I1209 12:54:06.069622 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:54:06 crc kubenswrapper[4703]: E1209 12:54:06.070513 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:54:10 crc kubenswrapper[4703]: E1209 12:54:10.072871 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:54:13 crc kubenswrapper[4703]: E1209 12:54:13.072684 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:54:20 crc kubenswrapper[4703]: I1209 12:54:20.069877 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:54:20 crc kubenswrapper[4703]: E1209 12:54:20.070943 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:54:23 crc kubenswrapper[4703]: E1209 12:54:23.084669 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:54:27 crc kubenswrapper[4703]: E1209 12:54:27.072983 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:54:31 crc kubenswrapper[4703]: I1209 12:54:31.077511 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:54:31 crc kubenswrapper[4703]: E1209 12:54:31.078914 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:54:34 crc kubenswrapper[4703]: E1209 12:54:34.073245 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:54:41 crc kubenswrapper[4703]: E1209 12:54:41.080643 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:54:45 crc kubenswrapper[4703]: I1209 12:54:45.070558 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:54:45 crc kubenswrapper[4703]: E1209 12:54:45.071807 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:54:49 crc kubenswrapper[4703]: E1209 12:54:49.072422 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:54:53 crc kubenswrapper[4703]: E1209 12:54:53.072363 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:55:00 crc kubenswrapper[4703]: I1209 12:55:00.070031 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:55:00 crc kubenswrapper[4703]: E1209 12:55:00.071347 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:55:00 crc kubenswrapper[4703]: E1209 12:55:00.072840 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:55:05 crc kubenswrapper[4703]: E1209 12:55:05.071999 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:55:13 crc kubenswrapper[4703]: I1209 12:55:13.072512 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:55:13 crc kubenswrapper[4703]: E1209 12:55:13.074595 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:55:15 crc kubenswrapper[4703]: E1209 12:55:15.073070 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:55:16 crc kubenswrapper[4703]: E1209 12:55:16.072603 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:55:25 crc kubenswrapper[4703]: I1209 12:55:25.070099 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:55:25 crc kubenswrapper[4703]: E1209 12:55:25.071383 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:55:28 crc kubenswrapper[4703]: E1209 12:55:28.073932 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:55:31 crc kubenswrapper[4703]: E1209 12:55:31.085171 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:55:39 crc kubenswrapper[4703]: I1209 12:55:39.070488 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:55:39 crc kubenswrapper[4703]: E1209 12:55:39.071492 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:55:41 crc kubenswrapper[4703]: E1209 12:55:41.081735 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:55:46 crc kubenswrapper[4703]: E1209 12:55:46.073360 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 12:55:54 crc kubenswrapper[4703]: I1209 12:55:54.071695 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee"
Dec 09 12:55:54 crc kubenswrapper[4703]: E1209 12:55:54.072907 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 12:55:55 crc kubenswrapper[4703]: E1209 12:55:55.073000 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 12:56:00 crc kubenswrapper[4703]: E1209 12:56:00.072256 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:56:06 crc kubenswrapper[4703]: E1209 12:56:06.073480 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:56:09 crc kubenswrapper[4703]: I1209 12:56:09.069982 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:56:09 crc kubenswrapper[4703]: E1209 12:56:09.070904 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:56:14 crc kubenswrapper[4703]: E1209 12:56:14.073134 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:56:17 crc kubenswrapper[4703]: E1209 12:56:17.075028 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:56:20 crc kubenswrapper[4703]: I1209 12:56:20.070584 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:56:20 crc kubenswrapper[4703]: E1209 12:56:20.071559 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:56:27 crc kubenswrapper[4703]: E1209 12:56:27.073974 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:56:32 crc kubenswrapper[4703]: E1209 12:56:32.072565 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:56:34 crc kubenswrapper[4703]: I1209 12:56:34.070894 4703 scope.go:117] "RemoveContainer" 
containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:56:34 crc kubenswrapper[4703]: E1209 12:56:34.072156 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:56:42 crc kubenswrapper[4703]: E1209 12:56:42.072848 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:56:45 crc kubenswrapper[4703]: E1209 12:56:45.073007 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:56:48 crc kubenswrapper[4703]: I1209 12:56:48.070020 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:56:48 crc kubenswrapper[4703]: E1209 12:56:48.070935 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:56:54 crc kubenswrapper[4703]: E1209 12:56:54.076377 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:56:56 crc kubenswrapper[4703]: E1209 12:56:56.072837 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:56:59 crc kubenswrapper[4703]: I1209 12:56:59.070628 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:56:59 crc kubenswrapper[4703]: E1209 12:56:59.071862 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" 
podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:57:07 crc kubenswrapper[4703]: E1209 12:57:07.073606 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:57:09 crc kubenswrapper[4703]: E1209 12:57:09.072286 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:57:12 crc kubenswrapper[4703]: I1209 12:57:12.071106 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:57:12 crc kubenswrapper[4703]: E1209 12:57:12.071934 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:57:20 crc kubenswrapper[4703]: E1209 12:57:20.073619 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:57:22 crc kubenswrapper[4703]: E1209 12:57:22.073266 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:57:26 crc kubenswrapper[4703]: I1209 12:57:26.070048 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:57:26 crc kubenswrapper[4703]: E1209 12:57:26.071257 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:57:33 crc kubenswrapper[4703]: E1209 12:57:33.073291 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:57:34 crc kubenswrapper[4703]: E1209 12:57:34.071942 4703 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:57:41 crc kubenswrapper[4703]: I1209 12:57:41.078129 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:57:41 crc kubenswrapper[4703]: E1209 12:57:41.079506 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:57:44 crc kubenswrapper[4703]: E1209 12:57:44.074227 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:57:49 crc kubenswrapper[4703]: E1209 12:57:49.073913 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:57:50 crc kubenswrapper[4703]: I1209 12:57:50.062741 4703 generic.go:334] "Generic (PLEG): container finished" podID="1e42421c-88e1-4119-a154-33226550fe4d" containerID="1ca67fd751d2fabda829351b21f51879b1494f736ac437d7c89ba740a5665ca1" exitCode=2 Dec 09 12:57:50 crc kubenswrapper[4703]: I1209 12:57:50.062820 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" event={"ID":"1e42421c-88e1-4119-a154-33226550fe4d","Type":"ContainerDied","Data":"1ca67fd751d2fabda829351b21f51879b1494f736ac437d7c89ba740a5665ca1"} Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.705935 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.753008 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key\") pod \"1e42421c-88e1-4119-a154-33226550fe4d\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.753105 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsfkw\" (UniqueName: \"kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw\") pod \"1e42421c-88e1-4119-a154-33226550fe4d\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.753174 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory\") pod \"1e42421c-88e1-4119-a154-33226550fe4d\" (UID: \"1e42421c-88e1-4119-a154-33226550fe4d\") " Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.760922 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw" (OuterVolumeSpecName: "kube-api-access-vsfkw") pod "1e42421c-88e1-4119-a154-33226550fe4d" (UID: "1e42421c-88e1-4119-a154-33226550fe4d"). InnerVolumeSpecName "kube-api-access-vsfkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.813484 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory" (OuterVolumeSpecName: "inventory") pod "1e42421c-88e1-4119-a154-33226550fe4d" (UID: "1e42421c-88e1-4119-a154-33226550fe4d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.821677 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e42421c-88e1-4119-a154-33226550fe4d" (UID: "1e42421c-88e1-4119-a154-33226550fe4d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.857150 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsfkw\" (UniqueName: \"kubernetes.io/projected/1e42421c-88e1-4119-a154-33226550fe4d-kube-api-access-vsfkw\") on node \"crc\" DevicePath \"\"" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.857231 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:57:51 crc kubenswrapper[4703]: I1209 12:57:51.857248 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e42421c-88e1-4119-a154-33226550fe4d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:57:52 crc kubenswrapper[4703]: I1209 12:57:52.084024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" event={"ID":"1e42421c-88e1-4119-a154-33226550fe4d","Type":"ContainerDied","Data":"fc12920fabe5c35c16dda1361c1d00512c5f2822687746040da4c613477c8371"} Dec 09 12:57:52 crc kubenswrapper[4703]: I1209 12:57:52.084080 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4" Dec 09 12:57:52 crc kubenswrapper[4703]: I1209 12:57:52.084099 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc12920fabe5c35c16dda1361c1d00512c5f2822687746040da4c613477c8371" Dec 09 12:57:56 crc kubenswrapper[4703]: I1209 12:57:56.070162 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:57:56 crc kubenswrapper[4703]: E1209 12:57:56.071335 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:57:57 crc kubenswrapper[4703]: E1209 12:57:57.073161 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:58:04 crc kubenswrapper[4703]: E1209 12:58:04.074509 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.882971 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:05 crc kubenswrapper[4703]: E1209 12:58:05.884220 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="extract-utilities" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884241 4703 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="extract-utilities" Dec 09 12:58:05 crc kubenswrapper[4703]: E1209 12:58:05.884283 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e42421c-88e1-4119-a154-33226550fe4d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884297 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e42421c-88e1-4119-a154-33226550fe4d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:05 crc kubenswrapper[4703]: E1209 12:58:05.884332 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="extract-content" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884341 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="extract-content" Dec 09 12:58:05 crc kubenswrapper[4703]: E1209 12:58:05.884354 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="registry-server" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884362 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="registry-server" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884624 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed943309-01ad-43ec-88b2-cfb8978af1ea" containerName="registry-server" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.884643 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e42421c-88e1-4119-a154-33226550fe4d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.887395 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:05 crc kubenswrapper[4703]: I1209 12:58:05.899057 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.067307 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.067786 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.068015 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ks6x\" (UniqueName: \"kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.171305 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.171589 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ks6x\" (UniqueName: \"kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.171658 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.171961 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.172082 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.201691 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7ks6x\" (UniqueName: \"kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x\") pod \"community-operators-ft7gp\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.218502 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:06 crc kubenswrapper[4703]: I1209 12:58:06.886158 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:06 crc kubenswrapper[4703]: W1209 12:58:06.889674 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3529690c_d78c_439f_bcaa_f5dd54d44c8d.slice/crio-21d6879fc9d9fc5dae3fafddee921e8c5265698cbd26c5572d017dbffe806642 WatchSource:0}: Error finding container 21d6879fc9d9fc5dae3fafddee921e8c5265698cbd26c5572d017dbffe806642: Status 404 returned error can't find the container with id 21d6879fc9d9fc5dae3fafddee921e8c5265698cbd26c5572d017dbffe806642 Dec 09 12:58:07 crc kubenswrapper[4703]: I1209 12:58:07.254308 4703 generic.go:334] "Generic (PLEG): container finished" podID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerID="bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21" exitCode=0 Dec 09 12:58:07 crc kubenswrapper[4703]: I1209 12:58:07.254503 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerDied","Data":"bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21"} Dec 09 12:58:07 crc kubenswrapper[4703]: I1209 12:58:07.254829 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerStarted","Data":"21d6879fc9d9fc5dae3fafddee921e8c5265698cbd26c5572d017dbffe806642"} Dec 09 12:58:08 crc kubenswrapper[4703]: I1209 12:58:08.070347 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:58:08 crc kubenswrapper[4703]: E1209 12:58:08.071002 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:58:08 crc kubenswrapper[4703]: I1209 12:58:08.268878 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerStarted","Data":"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5"} Dec 09 12:58:09 crc kubenswrapper[4703]: I1209 12:58:09.281547 4703 generic.go:334] "Generic (PLEG): container finished" podID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerID="b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5" exitCode=0 Dec 09 12:58:09 crc kubenswrapper[4703]: I1209 12:58:09.281639 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" 
event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerDied","Data":"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5"} Dec 09 12:58:10 crc kubenswrapper[4703]: I1209 12:58:10.296561 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerStarted","Data":"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d"} Dec 09 12:58:10 crc kubenswrapper[4703]: I1209 12:58:10.330895 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ft7gp" podStartSLOduration=2.900805398 podStartE2EDuration="5.330875394s" podCreationTimestamp="2025-12-09 12:58:05 +0000 UTC" firstStartedPulling="2025-12-09 12:58:07.256806199 +0000 UTC m=+3186.505569718" lastFinishedPulling="2025-12-09 12:58:09.686876195 +0000 UTC m=+3188.935639714" observedRunningTime="2025-12-09 12:58:10.326942019 +0000 UTC m=+3189.575705538" watchObservedRunningTime="2025-12-09 12:58:10.330875394 +0000 UTC m=+3189.579638913" Dec 09 12:58:11 crc kubenswrapper[4703]: E1209 12:58:11.081314 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:58:16 crc kubenswrapper[4703]: I1209 12:58:16.218942 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:16 crc kubenswrapper[4703]: I1209 12:58:16.219962 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:16 crc kubenswrapper[4703]: I1209 12:58:16.285322 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:16 crc kubenswrapper[4703]: I1209 12:58:16.423979 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:16 crc kubenswrapper[4703]: I1209 12:58:16.528282 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:18 crc kubenswrapper[4703]: E1209 12:58:18.072304 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:58:18 crc kubenswrapper[4703]: I1209 12:58:18.394128 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ft7gp" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="registry-server" containerID="cri-o://46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d" gracePeriod=2 Dec 09 12:58:18 crc kubenswrapper[4703]: I1209 12:58:18.996080 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.070680 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:58:19 crc kubenswrapper[4703]: E1209 12:58:19.071072 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.129575 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content\") pod \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.129782 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ks6x\" (UniqueName: \"kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x\") pod \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.129845 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities\") pod \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\" (UID: \"3529690c-d78c-439f-bcaa-f5dd54d44c8d\") " Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.130933 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities" (OuterVolumeSpecName: "utilities") pod "3529690c-d78c-439f-bcaa-f5dd54d44c8d" (UID: "3529690c-d78c-439f-bcaa-f5dd54d44c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.137303 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x" (OuterVolumeSpecName: "kube-api-access-7ks6x") pod "3529690c-d78c-439f-bcaa-f5dd54d44c8d" (UID: "3529690c-d78c-439f-bcaa-f5dd54d44c8d"). InnerVolumeSpecName "kube-api-access-7ks6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.182961 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3529690c-d78c-439f-bcaa-f5dd54d44c8d" (UID: "3529690c-d78c-439f-bcaa-f5dd54d44c8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.237123 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ks6x\" (UniqueName: \"kubernetes.io/projected/3529690c-d78c-439f-bcaa-f5dd54d44c8d-kube-api-access-7ks6x\") on node \"crc\" DevicePath \"\"" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.237174 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.237213 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3529690c-d78c-439f-bcaa-f5dd54d44c8d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.407708 4703 generic.go:334] "Generic (PLEG): container finished" podID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerID="46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d" exitCode=0 Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.407776 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerDied","Data":"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d"} Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.407808 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft7gp" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.407845 4703 scope.go:117] "RemoveContainer" containerID="46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.407829 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft7gp" event={"ID":"3529690c-d78c-439f-bcaa-f5dd54d44c8d","Type":"ContainerDied","Data":"21d6879fc9d9fc5dae3fafddee921e8c5265698cbd26c5572d017dbffe806642"} Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.431420 4703 scope.go:117] "RemoveContainer" containerID="b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.449702 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.460633 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ft7gp"] Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.475842 4703 scope.go:117] "RemoveContainer" containerID="bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.517540 4703 scope.go:117] "RemoveContainer" containerID="46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d" Dec 09 12:58:19 crc kubenswrapper[4703]: E1209 12:58:19.518145 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d\": container with ID starting with 46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d not found: ID does not exist" containerID="46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.518339 
4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d"} err="failed to get container status \"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d\": rpc error: code = NotFound desc = could not find container \"46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d\": container with ID starting with 46a3227053ee437dff50f9fa8a4bc385249757aaf3a2c9a5868779215afe2d4d not found: ID does not exist" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.518380 4703 scope.go:117] "RemoveContainer" containerID="b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5" Dec 09 12:58:19 crc kubenswrapper[4703]: E1209 12:58:19.518985 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5\": container with ID starting with b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5 not found: ID does not exist" containerID="b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.519037 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5"} err="failed to get container status \"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5\": rpc error: code = NotFound desc = could not find container \"b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5\": container with ID starting with b6b870a6b859ba3daea777dd4a8566f3c6324ea4a90490e5f1a200a96b227ba5 not found: ID does not exist" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.519084 4703 scope.go:117] "RemoveContainer" containerID="bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21" Dec 09 12:58:19 crc kubenswrapper[4703]: E1209 12:58:19.519502 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21\": container with ID starting with bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21 not found: ID does not exist" containerID="bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21" Dec 09 12:58:19 crc kubenswrapper[4703]: I1209 12:58:19.519533 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21"} err="failed to get container status \"bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21\": rpc error: code = NotFound desc = could not find container \"bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21\": container with ID starting with bf59c6fc85a48366e30f016f7f410da8b0ecc7606996d073c3f761f2f3d2fb21 not found: ID does not exist" Dec 09 12:58:21 crc kubenswrapper[4703]: I1209 12:58:21.095021 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" path="/var/lib/kubelet/pods/3529690c-d78c-439f-bcaa-f5dd54d44c8d/volumes" Dec 09 12:58:22 crc kubenswrapper[4703]: E1209 12:58:22.073483 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.038019 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n"] Dec 09 12:58:29 crc kubenswrapper[4703]: E1209 12:58:29.039395 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="extract-utilities" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.039411 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="extract-utilities" Dec 09 12:58:29 crc kubenswrapper[4703]: E1209 12:58:29.039434 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="extract-content" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.039440 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="extract-content" Dec 09 12:58:29 crc kubenswrapper[4703]: E1209 12:58:29.039454 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="registry-server" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.039460 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="registry-server" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.039718 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="3529690c-d78c-439f-bcaa-f5dd54d44c8d" containerName="registry-server" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.041802 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.047324 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.048415 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.048657 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.048778 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.061997 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n"] Dec 09 12:58:29 crc kubenswrapper[4703]: E1209 12:58:29.072773 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.127084 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfshh\" (UniqueName: \"kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.127151 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.127406 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.231406 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfshh\" (UniqueName: \"kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.231498 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.231558 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.239615 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.239968 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.251132 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfshh\" (UniqueName: \"kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cf88n\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:29 crc kubenswrapper[4703]: I1209 12:58:29.378253 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 12:58:30 crc kubenswrapper[4703]: I1209 12:58:30.007048 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n"] Dec 09 12:58:30 crc kubenswrapper[4703]: I1209 12:58:30.536359 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" event={"ID":"cbef74eb-61bc-4efa-8621-6e089311a571","Type":"ContainerStarted","Data":"dec4204b21ebcfbbe5f8f4d1c9fbd5df875e4e2de0c7f6ed51357713cba1c388"} Dec 09 12:58:31 crc kubenswrapper[4703]: I1209 12:58:31.552319 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" event={"ID":"cbef74eb-61bc-4efa-8621-6e089311a571","Type":"ContainerStarted","Data":"ae9d9adcd217469896806fcfa805bec8eeff82d4a4a57c38e2f177119c11f668"} Dec 09 12:58:31 crc kubenswrapper[4703]: I1209 12:58:31.582737 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" podStartSLOduration=2.053466213 podStartE2EDuration="2.582716498s" podCreationTimestamp="2025-12-09 12:58:29 +0000 UTC" firstStartedPulling="2025-12-09 12:58:30.02067113 +0000 UTC m=+3209.269434659" lastFinishedPulling="2025-12-09 12:58:30.549921425 +0000 UTC m=+3209.798684944" observedRunningTime="2025-12-09 12:58:31.578966049 +0000 UTC m=+3210.827729578" watchObservedRunningTime="2025-12-09 12:58:31.582716498 +0000 UTC m=+3210.831480017" Dec 09 12:58:33 crc kubenswrapper[4703]: I1209 12:58:33.069874 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 12:58:33 crc kubenswrapper[4703]: I1209 12:58:33.576983 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7"} Dec 09 12:58:34 crc kubenswrapper[4703]: E1209 12:58:34.073915 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:58:40 crc kubenswrapper[4703]: E1209 12:58:40.072671 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:58:46 crc kubenswrapper[4703]: I1209 12:58:46.074241 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:58:46 crc kubenswrapper[4703]: E1209 12:58:46.204670 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted 
or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:58:46 crc kubenswrapper[4703]: E1209 12:58:46.204762 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:58:46 crc kubenswrapper[4703]: E1209 12:58:46.204992 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:58:46 crc kubenswrapper[4703]: E1209 12:58:46.206561 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:58:54 crc kubenswrapper[4703]: E1209 12:58:54.075991 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:58:57 crc kubenswrapper[4703]: E1209 12:58:57.071943 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:08 crc kubenswrapper[4703]: E1209 12:59:08.200333 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:59:08 crc kubenswrapper[4703]: E1209 12:59:08.201178 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 12:59:08 crc kubenswrapper[4703]: E1209 12:59:08.201377 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:59:08 crc kubenswrapper[4703]: E1209 12:59:08.202611 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:59:09 crc kubenswrapper[4703]: E1209 12:59:09.073368 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:19 crc kubenswrapper[4703]: E1209 12:59:19.076452 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:59:20 crc kubenswrapper[4703]: E1209 12:59:20.071073 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.765572 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.769414 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.784797 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.867654 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfjt\" (UniqueName: \"kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.867725 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.868292 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.971150 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfjt\" (UniqueName: \"kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: 
I1209 12:59:31.971282 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.971430 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.971995 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:31 crc kubenswrapper[4703]: I1209 12:59:31.972060 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:32 crc kubenswrapper[4703]: I1209 12:59:32.001370 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfjt\" (UniqueName: \"kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt\") pod \"redhat-operators-2cd52\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:32 crc kubenswrapper[4703]: I1209 12:59:32.102695 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:32 crc kubenswrapper[4703]: I1209 12:59:32.643252 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:33 crc kubenswrapper[4703]: E1209 12:59:33.086085 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:59:33 crc kubenswrapper[4703]: E1209 12:59:33.088214 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:33 crc kubenswrapper[4703]: I1209 12:59:33.349412 4703 generic.go:334] "Generic (PLEG): container finished" podID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerID="e8f65727cbfd865ba21c3492049bf31685bfefea6847412dde814f80d2c875e1" exitCode=0 Dec 09 12:59:33 crc kubenswrapper[4703]: I1209 12:59:33.350002 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerDied","Data":"e8f65727cbfd865ba21c3492049bf31685bfefea6847412dde814f80d2c875e1"} Dec 09 12:59:33 crc kubenswrapper[4703]: I1209 12:59:33.350158 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerStarted","Data":"cc618f3c50115eece254d6a8f56115d9559c95601789f2feca87b9b57af25f50"} Dec 09 12:59:34 crc kubenswrapper[4703]: I1209 12:59:34.363642 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerStarted","Data":"f6700db69c24cf2fb5e2d9ce649f393138dfa9a407689d91966de758d28f347f"} Dec 09 12:59:36 crc kubenswrapper[4703]: I1209 12:59:36.387356 4703 generic.go:334] "Generic (PLEG): container finished" podID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerID="f6700db69c24cf2fb5e2d9ce649f393138dfa9a407689d91966de758d28f347f" exitCode=0 Dec 09 12:59:36 crc kubenswrapper[4703]: I1209 12:59:36.387444 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerDied","Data":"f6700db69c24cf2fb5e2d9ce649f393138dfa9a407689d91966de758d28f347f"} Dec 09 12:59:37 crc kubenswrapper[4703]: I1209 12:59:37.402869 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerStarted","Data":"359367ecc926b3b13448022bfd579e4166b727cd72a531f7e40543427542823a"} Dec 09 12:59:37 crc kubenswrapper[4703]: I1209 12:59:37.432590 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2cd52" podStartSLOduration=3.023584138 podStartE2EDuration="6.432564222s" podCreationTimestamp="2025-12-09 12:59:31 +0000 UTC" firstStartedPulling="2025-12-09 12:59:33.354034364 +0000 UTC 
m=+3272.602797883" lastFinishedPulling="2025-12-09 12:59:36.763014448 +0000 UTC m=+3276.011777967" observedRunningTime="2025-12-09 12:59:37.423531822 +0000 UTC m=+3276.672295341" watchObservedRunningTime="2025-12-09 12:59:37.432564222 +0000 UTC m=+3276.681327741" Dec 09 12:59:42 crc kubenswrapper[4703]: I1209 12:59:42.103216 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:42 crc kubenswrapper[4703]: I1209 12:59:42.104089 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:43 crc kubenswrapper[4703]: I1209 12:59:43.154427 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2cd52" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="registry-server" probeResult="failure" output=< Dec 09 12:59:43 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 12:59:43 crc kubenswrapper[4703]: > Dec 09 12:59:45 crc kubenswrapper[4703]: E1209 12:59:45.073553 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:59:45 crc kubenswrapper[4703]: E1209 12:59:45.073617 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.576183 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.579972 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.588126 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.590872 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.591026 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5zz\" (UniqueName: \"kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.591099 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.693626 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.693897 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5zz\" (UniqueName: \"kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.694005 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.694431 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.694573 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.717274 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hf5zz\" (UniqueName: \"kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz\") pod \"certified-operators-hpmp5\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:49 crc kubenswrapper[4703]: I1209 12:59:49.957170 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:50 crc kubenswrapper[4703]: I1209 12:59:50.597428 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 12:59:50 crc kubenswrapper[4703]: W1209 12:59:50.609317 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a8911a_e665_4d98_8d51_bdc26bc12702.slice/crio-a99971d329905e01282656a159e0abd4e7b136de443bde0b45c3f4dfdb79cb63 WatchSource:0}: Error finding container a99971d329905e01282656a159e0abd4e7b136de443bde0b45c3f4dfdb79cb63: Status 404 returned error can't find the container with id a99971d329905e01282656a159e0abd4e7b136de443bde0b45c3f4dfdb79cb63 Dec 09 12:59:51 crc kubenswrapper[4703]: I1209 12:59:51.552568 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerID="ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20" exitCode=0 Dec 09 12:59:51 crc kubenswrapper[4703]: I1209 12:59:51.552710 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerDied","Data":"ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20"} Dec 09 12:59:51 crc kubenswrapper[4703]: I1209 12:59:51.552993 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerStarted","Data":"a99971d329905e01282656a159e0abd4e7b136de443bde0b45c3f4dfdb79cb63"} Dec 09 12:59:52 crc kubenswrapper[4703]: I1209 12:59:52.151699 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:52 crc kubenswrapper[4703]: I1209 12:59:52.210757 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:52 crc kubenswrapper[4703]: I1209 12:59:52.569107 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerStarted","Data":"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406"} Dec 09 12:59:52 crc kubenswrapper[4703]: I1209 12:59:52.760014 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:53 crc kubenswrapper[4703]: I1209 12:59:53.582180 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2cd52" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="registry-server" containerID="cri-o://359367ecc926b3b13448022bfd579e4166b727cd72a531f7e40543427542823a" gracePeriod=2 Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.604061 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6a8911a-e665-4d98-8d51-bdc26bc12702" 
containerID="9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406" exitCode=0 Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.604145 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerDied","Data":"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406"} Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.614147 4703 generic.go:334] "Generic (PLEG): container finished" podID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerID="359367ecc926b3b13448022bfd579e4166b727cd72a531f7e40543427542823a" exitCode=0 Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.614211 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerDied","Data":"359367ecc926b3b13448022bfd579e4166b727cd72a531f7e40543427542823a"} Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.746374 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.941480 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities\") pod \"21472530-8baf-4e12-bb1e-fdbce28afaf8\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.942457 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content\") pod \"21472530-8baf-4e12-bb1e-fdbce28afaf8\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.942576 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfjt\" (UniqueName: \"kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt\") pod \"21472530-8baf-4e12-bb1e-fdbce28afaf8\" (UID: \"21472530-8baf-4e12-bb1e-fdbce28afaf8\") " Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.942974 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities" (OuterVolumeSpecName: "utilities") pod "21472530-8baf-4e12-bb1e-fdbce28afaf8" (UID: "21472530-8baf-4e12-bb1e-fdbce28afaf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.946132 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:59:54 crc kubenswrapper[4703]: I1209 12:59:54.952546 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt" (OuterVolumeSpecName: "kube-api-access-rlfjt") pod "21472530-8baf-4e12-bb1e-fdbce28afaf8" (UID: "21472530-8baf-4e12-bb1e-fdbce28afaf8"). InnerVolumeSpecName "kube-api-access-rlfjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.051337 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfjt\" (UniqueName: \"kubernetes.io/projected/21472530-8baf-4e12-bb1e-fdbce28afaf8-kube-api-access-rlfjt\") on node \"crc\" DevicePath \"\"" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.087048 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21472530-8baf-4e12-bb1e-fdbce28afaf8" (UID: "21472530-8baf-4e12-bb1e-fdbce28afaf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.152988 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21472530-8baf-4e12-bb1e-fdbce28afaf8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.628535 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerStarted","Data":"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322"} Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.632321 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2cd52" event={"ID":"21472530-8baf-4e12-bb1e-fdbce28afaf8","Type":"ContainerDied","Data":"cc618f3c50115eece254d6a8f56115d9559c95601789f2feca87b9b57af25f50"} Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.632517 4703 scope.go:117] "RemoveContainer" containerID="359367ecc926b3b13448022bfd579e4166b727cd72a531f7e40543427542823a" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.632405 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2cd52" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.655289 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hpmp5" podStartSLOduration=3.200211767 podStartE2EDuration="6.655264928s" podCreationTimestamp="2025-12-09 12:59:49 +0000 UTC" firstStartedPulling="2025-12-09 12:59:51.556152957 +0000 UTC m=+3290.804916476" lastFinishedPulling="2025-12-09 12:59:55.011206128 +0000 UTC m=+3294.259969637" observedRunningTime="2025-12-09 12:59:55.652107305 +0000 UTC m=+3294.900870834" watchObservedRunningTime="2025-12-09 12:59:55.655264928 +0000 UTC m=+3294.904028437" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.658445 4703 scope.go:117] "RemoveContainer" containerID="f6700db69c24cf2fb5e2d9ce649f393138dfa9a407689d91966de758d28f347f" Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.684486 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.696436 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2cd52"] Dec 09 12:59:55 crc kubenswrapper[4703]: I1209 12:59:55.698887 4703 scope.go:117] "RemoveContainer" containerID="e8f65727cbfd865ba21c3492049bf31685bfefea6847412dde814f80d2c875e1" Dec 09 12:59:56 crc kubenswrapper[4703]: E1209 12:59:56.071526 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 12:59:57 crc kubenswrapper[4703]: E1209 12:59:57.072251 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 12:59:57 crc kubenswrapper[4703]: I1209 12:59:57.085595 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" path="/var/lib/kubelet/pods/21472530-8baf-4e12-bb1e-fdbce28afaf8/volumes" Dec 09 12:59:59 crc kubenswrapper[4703]: I1209 12:59:59.958443 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 12:59:59 crc kubenswrapper[4703]: I1209 12:59:59.959242 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.042803 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.158662 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw"] Dec 09 13:00:00 crc kubenswrapper[4703]: E1209 13:00:00.159574 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="extract-utilities" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.159599 4703 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="extract-utilities" Dec 09 13:00:00 crc kubenswrapper[4703]: E1209 13:00:00.159614 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="registry-server" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.159622 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="registry-server" Dec 09 13:00:00 crc kubenswrapper[4703]: E1209 13:00:00.159653 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="extract-content" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.159659 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="extract-content" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.159954 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="21472530-8baf-4e12-bb1e-fdbce28afaf8" containerName="registry-server" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.161283 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.163968 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.166037 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.176815 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw"] Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.305112 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x722h\" (UniqueName: \"kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.305180 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.305244 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.407805 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x722h\" (UniqueName: \"kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h\") pod \"collect-profiles-29421420-m6qgw\" (UID: 
\"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.407871 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.407924 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.408904 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.428911 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.433094 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x722h\" (UniqueName: \"kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h\") pod \"collect-profiles-29421420-m6qgw\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.484036 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:00 crc kubenswrapper[4703]: I1209 13:00:00.792434 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 13:00:01 crc kubenswrapper[4703]: I1209 13:00:01.008849 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw"] Dec 09 13:00:01 crc kubenswrapper[4703]: I1209 13:00:01.168099 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 13:00:01 crc kubenswrapper[4703]: I1209 13:00:01.722498 4703 generic.go:334] "Generic (PLEG): container finished" podID="39cb7561-4275-41f0-914d-7a0ea0653e27" containerID="c4a1699dc61b1b1723b2b8a4513847ca72ecb8a36b3f7feb37ff08a1cb125a74" exitCode=0 Dec 09 13:00:01 crc kubenswrapper[4703]: I1209 13:00:01.722651 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" event={"ID":"39cb7561-4275-41f0-914d-7a0ea0653e27","Type":"ContainerDied","Data":"c4a1699dc61b1b1723b2b8a4513847ca72ecb8a36b3f7feb37ff08a1cb125a74"} Dec 09 13:00:01 crc kubenswrapper[4703]: I1209 13:00:01.722701 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" event={"ID":"39cb7561-4275-41f0-914d-7a0ea0653e27","Type":"ContainerStarted","Data":"2feb17582330c1d7be806b8275bd7c5d14ba39dfc224b6ee42418041df53bfb6"} Dec 09 13:00:02 crc kubenswrapper[4703]: I1209 13:00:02.732436 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hpmp5" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="registry-server" containerID="cri-o://f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322" gracePeriod=2 Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.236316 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.408614 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume\") pod \"39cb7561-4275-41f0-914d-7a0ea0653e27\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.410295 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x722h\" (UniqueName: \"kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h\") pod \"39cb7561-4275-41f0-914d-7a0ea0653e27\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.410782 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume\") pod \"39cb7561-4275-41f0-914d-7a0ea0653e27\" (UID: \"39cb7561-4275-41f0-914d-7a0ea0653e27\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.412002 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume" (OuterVolumeSpecName: "config-volume") pod "39cb7561-4275-41f0-914d-7a0ea0653e27" (UID: "39cb7561-4275-41f0-914d-7a0ea0653e27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.417100 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h" (OuterVolumeSpecName: "kube-api-access-x722h") pod "39cb7561-4275-41f0-914d-7a0ea0653e27" (UID: "39cb7561-4275-41f0-914d-7a0ea0653e27"). InnerVolumeSpecName "kube-api-access-x722h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.417321 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39cb7561-4275-41f0-914d-7a0ea0653e27" (UID: "39cb7561-4275-41f0-914d-7a0ea0653e27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.463423 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.513668 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39cb7561-4275-41f0-914d-7a0ea0653e27-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.513723 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39cb7561-4275-41f0-914d-7a0ea0653e27-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.513740 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x722h\" (UniqueName: \"kubernetes.io/projected/39cb7561-4275-41f0-914d-7a0ea0653e27-kube-api-access-x722h\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.615660 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5zz\" (UniqueName: \"kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz\") pod \"c6a8911a-e665-4d98-8d51-bdc26bc12702\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.615783 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities\") pod \"c6a8911a-e665-4d98-8d51-bdc26bc12702\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.616041 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content\") pod \"c6a8911a-e665-4d98-8d51-bdc26bc12702\" (UID: \"c6a8911a-e665-4d98-8d51-bdc26bc12702\") " Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.617078 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities" (OuterVolumeSpecName: "utilities") pod "c6a8911a-e665-4d98-8d51-bdc26bc12702" (UID: "c6a8911a-e665-4d98-8d51-bdc26bc12702"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.620348 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz" (OuterVolumeSpecName: "kube-api-access-hf5zz") pod "c6a8911a-e665-4d98-8d51-bdc26bc12702" (UID: "c6a8911a-e665-4d98-8d51-bdc26bc12702"). InnerVolumeSpecName "kube-api-access-hf5zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.675493 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6a8911a-e665-4d98-8d51-bdc26bc12702" (UID: "c6a8911a-e665-4d98-8d51-bdc26bc12702"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.719657 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.719702 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5zz\" (UniqueName: \"kubernetes.io/projected/c6a8911a-e665-4d98-8d51-bdc26bc12702-kube-api-access-hf5zz\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.719716 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a8911a-e665-4d98-8d51-bdc26bc12702-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.755912 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerID="f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322" exitCode=0 Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.756940 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hpmp5" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.756979 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerDied","Data":"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322"} Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.761383 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hpmp5" event={"ID":"c6a8911a-e665-4d98-8d51-bdc26bc12702","Type":"ContainerDied","Data":"a99971d329905e01282656a159e0abd4e7b136de443bde0b45c3f4dfdb79cb63"} Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.761415 4703 scope.go:117] "RemoveContainer" containerID="f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.764613 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" event={"ID":"39cb7561-4275-41f0-914d-7a0ea0653e27","Type":"ContainerDied","Data":"2feb17582330c1d7be806b8275bd7c5d14ba39dfc224b6ee42418041df53bfb6"} Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.764644 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2feb17582330c1d7be806b8275bd7c5d14ba39dfc224b6ee42418041df53bfb6" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.764704 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.799604 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.800174 4703 scope.go:117] "RemoveContainer" containerID="9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.808946 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hpmp5"] Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.821504 4703 scope.go:117] "RemoveContainer" containerID="ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.841686 4703 scope.go:117] "RemoveContainer" containerID="f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322" Dec 09 13:00:03 crc kubenswrapper[4703]: E1209 13:00:03.842112 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322\": container with ID starting with f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322 not found: ID does not exist" containerID="f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.842152 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322"} err="failed to get container status \"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322\": rpc error: code = NotFound desc = could not find container \"f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322\": container with ID starting with f95da1dddcf0c39b6574b76fe3d6724eaee7b931d1d51fd30023e060f49e0322 not found: ID does not exist" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.842183 4703 scope.go:117] "RemoveContainer" containerID="9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406" Dec 09 13:00:03 crc kubenswrapper[4703]: E1209 13:00:03.842573 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406\": container with ID starting with 9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406 not found: ID does not exist" containerID="9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.842613 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406"} err="failed to get container status \"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406\": rpc error: code = NotFound desc = could not find container \"9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406\": container with ID starting with 9680940a41d820f49df457dd102ab80d05143147d0fe40d28963c60dc1600406 not found: ID does not exist" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.842628 4703 scope.go:117] "RemoveContainer" containerID="ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20" Dec 09 13:00:03 crc kubenswrapper[4703]: E1209 13:00:03.842930 4703 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20\": container with ID starting with ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20 not found: ID does not exist" containerID="ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20" Dec 09 13:00:03 crc kubenswrapper[4703]: I1209 13:00:03.842974 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20"} err="failed to get container status \"ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20\": rpc error: code = NotFound desc = could not find container \"ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20\": container with ID starting with ff2cea7fefa21e074fcc8263223aa19ba22bdf7417078118c264e72955ed0b20 not found: ID does not exist" Dec 09 13:00:04 crc kubenswrapper[4703]: I1209 13:00:04.336633 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh"] Dec 09 13:00:04 crc kubenswrapper[4703]: I1209 13:00:04.347608 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-wzghh"] Dec 09 13:00:05 crc kubenswrapper[4703]: I1209 13:00:05.091319 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03dacb45-3e46-407f-8679-66e59131494c" path="/var/lib/kubelet/pods/03dacb45-3e46-407f-8679-66e59131494c/volumes" Dec 09 13:00:05 crc kubenswrapper[4703]: I1209 13:00:05.094658 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" path="/var/lib/kubelet/pods/c6a8911a-e665-4d98-8d51-bdc26bc12702/volumes" Dec 09 13:00:08 crc kubenswrapper[4703]: E1209 13:00:08.073148 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:00:10 crc kubenswrapper[4703]: E1209 13:00:10.073683 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:00:22 crc kubenswrapper[4703]: E1209 13:00:22.073947 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:00:23 crc kubenswrapper[4703]: I1209 13:00:23.432488 4703 scope.go:117] "RemoveContainer" containerID="5d7249e747e0119dfdb8cd04a272497c1316736bd31cd231504ce79f521a2944" Dec 09 13:00:25 crc kubenswrapper[4703]: E1209 13:00:25.072049 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:00:36 crc kubenswrapper[4703]: E1209 13:00:36.072969 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:00:37 crc kubenswrapper[4703]: E1209 13:00:37.072001 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:00:47 crc kubenswrapper[4703]: E1209 13:00:47.073291 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:00:51 crc kubenswrapper[4703]: E1209 13:00:51.079776 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.083290 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.083993 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.179695 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421421-mc2kd"] Dec 09 13:01:00 crc kubenswrapper[4703]: E1209 13:01:00.180547 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="registry-server" Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.180575 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="registry-server" Dec 09 13:01:00 crc kubenswrapper[4703]: E1209 13:01:00.180616 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="extract-content" Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.180629 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="extract-content" Dec 09 13:01:00 crc 
Dec 09 13:01:00 crc kubenswrapper[4703]: E1209 13:01:00.180653 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="extract-utilities"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.180661 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="extract-utilities"
Dec 09 13:01:00 crc kubenswrapper[4703]: E1209 13:01:00.180692 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39cb7561-4275-41f0-914d-7a0ea0653e27" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.180701 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="39cb7561-4275-41f0-914d-7a0ea0653e27" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.181015 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a8911a-e665-4d98-8d51-bdc26bc12702" containerName="registry-server"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.181041 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="39cb7561-4275-41f0-914d-7a0ea0653e27" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.182262 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.194277 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421421-mc2kd"]
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.322255 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95l42\" (UniqueName: \"kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.322352 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.322413 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-config-data\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.323862 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.425657 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95l42\" (UniqueName: \"kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.425762 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.425814 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-config-data\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.425868 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.433483 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.433713 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.439030 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-config-data\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.443544 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95l42\" (UniqueName: \"kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42\") pod \"keystone-cron-29421421-mc2kd\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") " pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:00 crc kubenswrapper[4703]: I1209 13:01:00.503737 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:01 crc kubenswrapper[4703]: I1209 13:01:01.066956 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421421-mc2kd"]
Dec 09 13:01:01 crc kubenswrapper[4703]: I1209 13:01:01.415294 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-mc2kd" event={"ID":"e081ba82-5ecf-49af-9c37-aa05d051ee71","Type":"ContainerStarted","Data":"73193569de89aa53f4038c1aad54818f5d833600aa89e6af925cf569dcd3d590"}
Dec 09 13:01:01 crc kubenswrapper[4703]: I1209 13:01:01.415798 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-mc2kd" event={"ID":"e081ba82-5ecf-49af-9c37-aa05d051ee71","Type":"ContainerStarted","Data":"322623ad4667e54c74db14554efaa250576bb206e9188304814a1c502b518f36"}
Dec 09 13:01:01 crc kubenswrapper[4703]: I1209 13:01:01.438484 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421421-mc2kd" podStartSLOduration=1.438464282 podStartE2EDuration="1.438464282s" podCreationTimestamp="2025-12-09 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 13:01:01.433538791 +0000 UTC m=+3360.682302330" watchObservedRunningTime="2025-12-09 13:01:01.438464282 +0000 UTC m=+3360.687227801"
Dec 09 13:01:02 crc kubenswrapper[4703]: E1209 13:01:02.072223 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:01:03 crc kubenswrapper[4703]: E1209 13:01:03.072982 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:01:05 crc kubenswrapper[4703]: I1209 13:01:05.461696 4703 generic.go:334] "Generic (PLEG): container finished" podID="e081ba82-5ecf-49af-9c37-aa05d051ee71" containerID="73193569de89aa53f4038c1aad54818f5d833600aa89e6af925cf569dcd3d590" exitCode=0
Dec 09 13:01:05 crc kubenswrapper[4703]: I1209 13:01:05.461818 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-mc2kd" event={"ID":"e081ba82-5ecf-49af-9c37-aa05d051ee71","Type":"ContainerDied","Data":"73193569de89aa53f4038c1aad54818f5d833600aa89e6af925cf569dcd3d590"}
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.134344 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-mc2kd"
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.267761 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95l42\" (UniqueName: \"kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42\") pod \"e081ba82-5ecf-49af-9c37-aa05d051ee71\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") "
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.268039 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys\") pod \"e081ba82-5ecf-49af-9c37-aa05d051ee71\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") "
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.268082 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle\") pod \"e081ba82-5ecf-49af-9c37-aa05d051ee71\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") "
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.268164 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-config-data\") pod \"e081ba82-5ecf-49af-9c37-aa05d051ee71\" (UID: \"e081ba82-5ecf-49af-9c37-aa05d051ee71\") "
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.276843 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42" (OuterVolumeSpecName: "kube-api-access-95l42") pod "e081ba82-5ecf-49af-9c37-aa05d051ee71" (UID: "e081ba82-5ecf-49af-9c37-aa05d051ee71"). InnerVolumeSpecName "kube-api-access-95l42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.277381 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e081ba82-5ecf-49af-9c37-aa05d051ee71" (UID: "e081ba82-5ecf-49af-9c37-aa05d051ee71"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.313762 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e081ba82-5ecf-49af-9c37-aa05d051ee71" (UID: "e081ba82-5ecf-49af-9c37-aa05d051ee71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
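The cron container (ID 73193569…) goes ContainerStarted at 13:01:01 and finishes with exitCode=0 at 13:01:05, after which the kubelet immediately unmounts the pod's secrets. Per-container run time falls straight out of the PLEG ContainerStarted/ContainerDied pairs; a minimal sketch against the same hypothetical kubelet.log (the year is again an assumption):

```python
import re
from datetime import datetime

# PLEG events look like: ... event={"ID":"...","Type":"ContainerStarted","Data":"<64-hex id>"}
EVENT = re.compile(
    r'^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) .*'
    r'"Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]{64})"', re.M)

def container_runtimes(text, year=2025):
    """Map container ID -> seconds between its ContainerStarted and ContainerDied events."""
    started, runtimes = {}, {}
    for ts, kind, cid in EVENT.findall(text):
        t = datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S")
        if kind == "ContainerStarted":
            started[cid] = t
        elif cid in started:
            runtimes[cid[:12]] = int((t - started.pop(cid)).total_seconds())
    return runtimes
```

For 73193569… this yields 4 seconds: a normal, successful keystone cron run rather than a failure.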
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.369885 4703 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.369926 4703 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.369940 4703 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e081ba82-5ecf-49af-9c37-aa05d051ee71-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.369949 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95l42\" (UniqueName: \"kubernetes.io/projected/e081ba82-5ecf-49af-9c37-aa05d051ee71-kube-api-access-95l42\") on node \"crc\" DevicePath \"\"" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.490634 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-mc2kd" event={"ID":"e081ba82-5ecf-49af-9c37-aa05d051ee71","Type":"ContainerDied","Data":"322623ad4667e54c74db14554efaa250576bb206e9188304814a1c502b518f36"} Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.490686 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322623ad4667e54c74db14554efaa250576bb206e9188304814a1c502b518f36" Dec 09 13:01:07 crc kubenswrapper[4703]: I1209 13:01:07.490767 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-mc2kd" Dec 09 13:01:14 crc kubenswrapper[4703]: E1209 13:01:14.073728 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:01:18 crc kubenswrapper[4703]: E1209 13:01:18.073893 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:01:26 crc kubenswrapper[4703]: E1209 13:01:26.074133 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:01:30 crc kubenswrapper[4703]: I1209 13:01:30.083541 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:01:30 crc kubenswrapper[4703]: I1209 13:01:30.084308 4703 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:01:33 crc kubenswrapper[4703]: E1209 13:01:33.073581 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:01:41 crc kubenswrapper[4703]: E1209 13:01:41.082749 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:01:46 crc kubenswrapper[4703]: E1209 13:01:46.072384 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:01:55 crc kubenswrapper[4703]: E1209 13:01:55.073829 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:01:59 crc kubenswrapper[4703]: E1209 13:01:59.073454 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:02:00 crc kubenswrapper[4703]: I1209 13:02:00.084158 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:02:00 crc kubenswrapper[4703]: I1209 13:02:00.084264 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:02:00 crc kubenswrapper[4703]: I1209 13:02:00.084319 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:02:00 crc kubenswrapper[4703]: I1209 13:02:00.085247 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7"} 
pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:02:00 crc kubenswrapper[4703]: I1209 13:02:00.085308 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7" gracePeriod=600 Dec 09 13:02:01 crc kubenswrapper[4703]: I1209 13:02:01.103768 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7" exitCode=0 Dec 09 13:02:01 crc kubenswrapper[4703]: I1209 13:02:01.103850 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7"} Dec 09 13:02:01 crc kubenswrapper[4703]: I1209 13:02:01.104598 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9"} Dec 09 13:02:01 crc kubenswrapper[4703]: I1209 13:02:01.104641 4703 scope.go:117] "RemoveContainer" containerID="d00f1f478555417b562293d5f6f436f09e33ae076fa799dd8a6a17304f341bee" Dec 09 13:02:10 crc kubenswrapper[4703]: E1209 13:02:10.074019 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:02:13 crc kubenswrapper[4703]: E1209 13:02:13.072899 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:02:21 crc kubenswrapper[4703]: E1209 13:02:21.088229 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:02:24 crc kubenswrapper[4703]: E1209 13:02:24.073257 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:02:36 crc kubenswrapper[4703]: E1209 13:02:36.073774 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:02:37 crc kubenswrapper[4703]: E1209 13:02:37.071888 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:02:48 crc kubenswrapper[4703]: E1209 13:02:48.072859 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:02:49 crc kubenswrapper[4703]: E1209 13:02:49.073086 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:00 crc kubenswrapper[4703]: E1209 13:03:00.074034 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:03 crc kubenswrapper[4703]: E1209 13:03:03.073379 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:03:13 crc kubenswrapper[4703]: E1209 13:03:13.075349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:15 crc kubenswrapper[4703]: E1209 13:03:15.072623 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.550260 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:23 crc kubenswrapper[4703]: E1209 13:03:23.551610 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e081ba82-5ecf-49af-9c37-aa05d051ee71" containerName="keystone-cron" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.551630 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e081ba82-5ecf-49af-9c37-aa05d051ee71" containerName="keystone-cron" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.551937 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e081ba82-5ecf-49af-9c37-aa05d051ee71" containerName="keystone-cron" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.555751 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.587946 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.660751 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.660932 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8tx7\" (UniqueName: \"kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.661020 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.763261 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.763578 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8tx7\" (UniqueName: \"kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.763643 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.764061 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.764093 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.792316 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8tx7\" (UniqueName: \"kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7\") pod \"redhat-marketplace-6b4vd\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:23 crc kubenswrapper[4703]: I1209 13:03:23.893440 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:24 crc kubenswrapper[4703]: E1209 13:03:24.078560 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:24 crc kubenswrapper[4703]: I1209 13:03:24.422585 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:25 crc kubenswrapper[4703]: I1209 13:03:25.109785 4703 generic.go:334] "Generic (PLEG): container finished" podID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerID="6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca" exitCode=0 Dec 09 13:03:25 crc kubenswrapper[4703]: I1209 13:03:25.109862 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerDied","Data":"6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca"} Dec 09 13:03:25 crc kubenswrapper[4703]: I1209 13:03:25.110250 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerStarted","Data":"6db88f1ca1f791d32c1ddcb18a0730f1c37593d855e72b3adfa5081f03340bf3"} Dec 09 13:03:27 crc kubenswrapper[4703]: I1209 13:03:27.134290 4703 generic.go:334] "Generic (PLEG): container finished" podID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerID="caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d" exitCode=0 Dec 09 13:03:27 crc kubenswrapper[4703]: I1209 13:03:27.134358 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerDied","Data":"caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d"} Dec 09 13:03:28 crc kubenswrapper[4703]: I1209 13:03:28.150208 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerStarted","Data":"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9"} Dec 09 13:03:28 crc kubenswrapper[4703]: I1209 13:03:28.188855 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6b4vd" podStartSLOduration=2.766254885 podStartE2EDuration="5.188819783s" podCreationTimestamp="2025-12-09 
13:03:23 +0000 UTC" firstStartedPulling="2025-12-09 13:03:25.112185376 +0000 UTC m=+3504.360948895" lastFinishedPulling="2025-12-09 13:03:27.534750274 +0000 UTC m=+3506.783513793" observedRunningTime="2025-12-09 13:03:28.175514791 +0000 UTC m=+3507.424278320" watchObservedRunningTime="2025-12-09 13:03:28.188819783 +0000 UTC m=+3507.437583292" Dec 09 13:03:29 crc kubenswrapper[4703]: E1209 13:03:29.073155 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:03:33 crc kubenswrapper[4703]: I1209 13:03:33.893900 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:33 crc kubenswrapper[4703]: I1209 13:03:33.894544 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:33 crc kubenswrapper[4703]: I1209 13:03:33.959271 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:34 crc kubenswrapper[4703]: I1209 13:03:34.298277 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:34 crc kubenswrapper[4703]: I1209 13:03:34.380062 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:36 crc kubenswrapper[4703]: I1209 13:03:36.237665 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6b4vd" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="registry-server" containerID="cri-o://bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9" gracePeriod=2 Dec 09 13:03:36 crc kubenswrapper[4703]: I1209 13:03:36.992029 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:37 crc kubenswrapper[4703]: E1209 13:03:37.079597 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.183010 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content\") pod \"19f50df7-14bc-4b46-8171-a032fa5932ed\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.183277 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8tx7\" (UniqueName: \"kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7\") pod \"19f50df7-14bc-4b46-8171-a032fa5932ed\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.183345 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities\") pod \"19f50df7-14bc-4b46-8171-a032fa5932ed\" (UID: \"19f50df7-14bc-4b46-8171-a032fa5932ed\") " Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.185030 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities" (OuterVolumeSpecName: "utilities") pod "19f50df7-14bc-4b46-8171-a032fa5932ed" (UID: "19f50df7-14bc-4b46-8171-a032fa5932ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.188725 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7" (OuterVolumeSpecName: "kube-api-access-q8tx7") pod "19f50df7-14bc-4b46-8171-a032fa5932ed" (UID: "19f50df7-14bc-4b46-8171-a032fa5932ed"). InnerVolumeSpecName "kube-api-access-q8tx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.210507 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19f50df7-14bc-4b46-8171-a032fa5932ed" (UID: "19f50df7-14bc-4b46-8171-a032fa5932ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.256891 4703 generic.go:334] "Generic (PLEG): container finished" podID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerID="bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9" exitCode=0 Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.256945 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerDied","Data":"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9"} Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.257900 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6b4vd" event={"ID":"19f50df7-14bc-4b46-8171-a032fa5932ed","Type":"ContainerDied","Data":"6db88f1ca1f791d32c1ddcb18a0730f1c37593d855e72b3adfa5081f03340bf3"} Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.258073 4703 scope.go:117] "RemoveContainer" containerID="bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.256989 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6b4vd" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.291144 4703 scope.go:117] "RemoveContainer" containerID="caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.291178 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8tx7\" (UniqueName: \"kubernetes.io/projected/19f50df7-14bc-4b46-8171-a032fa5932ed-kube-api-access-q8tx7\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.291241 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.291254 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19f50df7-14bc-4b46-8171-a032fa5932ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.308210 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.321018 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6b4vd"] Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.323592 4703 scope.go:117] "RemoveContainer" containerID="6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.376445 4703 scope.go:117] "RemoveContainer" containerID="bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9" Dec 09 13:03:37 crc kubenswrapper[4703]: E1209 13:03:37.377256 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9\": container with ID starting with bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9 not found: ID does not exist" containerID="bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.377332 4703 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9"} err="failed to get container status \"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9\": rpc error: code = NotFound desc = could not find container \"bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9\": container with ID starting with bca97a4f45bd05604f71db2a1c2bb4e71cab18992f761ff294eb34bf311ff6d9 not found: ID does not exist" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.377365 4703 scope.go:117] "RemoveContainer" containerID="caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d" Dec 09 13:03:37 crc kubenswrapper[4703]: E1209 13:03:37.377839 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d\": container with ID starting with caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d not found: ID does not exist" containerID="caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.377866 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d"} err="failed to get container status \"caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d\": rpc error: code = NotFound desc = could not find container \"caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d\": container with ID starting with caf6e88a9fb53f3ba7638bd90ef80be218640252330d4233ba29c493044ed14d not found: ID does not exist" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.377880 4703 scope.go:117] "RemoveContainer" containerID="6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca" Dec 09 13:03:37 crc kubenswrapper[4703]: E1209 13:03:37.378358 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca\": container with ID starting with 6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca not found: ID does not exist" containerID="6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca" Dec 09 13:03:37 crc kubenswrapper[4703]: I1209 13:03:37.378408 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca"} err="failed to get container status \"6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca\": rpc error: code = NotFound desc = could not find container \"6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca\": container with ID starting with 6a883815f466eb029d99a426179d0f5f912ca5a7adb8945690428544334a05ca not found: ID does not exist" Dec 09 13:03:39 crc kubenswrapper[4703]: I1209 13:03:39.087320 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" path="/var/lib/kubelet/pods/19f50df7-14bc-4b46-8171-a032fa5932ed/volumes" Dec 09 13:03:43 crc kubenswrapper[4703]: E1209 13:03:43.074517 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
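Each of the three RemoveContainer calls above draws a NotFound from the runtime: the containers were already removed along with the pod, so the follow-up status lookup can only fail. These paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries are a benign cleanup race, not a fault, and it helps to strip them before eyeballing a capture for real errors; a minimal sketch:

```python
import re

# IDs reported as NotFound; the journal escapes the quotes around them as \"...\"
NOTFOUND = re.compile(r'could not find container \\"([0-9a-f]{64})\\"')

def benign_notfound_ids(text):
    """Container IDs whose deletion raced with pod cleanup (NotFound on status lookup)."""
    return sorted(set(NOTFOUND.findall(text)))

def without_benign_noise(text):
    """Drop the paired NotFound error/DeleteContainer lines from the log text."""
    return "\n".join(line for line in text.splitlines()
                     if "not found: ID does not exist" not in line)
```

Here the filter would remove six lines (two per container) without touching the genuine failures elsewhere in the log.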
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:03:50 crc kubenswrapper[4703]: E1209 13:03:50.072962 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:03:55 crc kubenswrapper[4703]: I1209 13:03:55.073073 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:03:55 crc kubenswrapper[4703]: E1209 13:03:55.203841 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:03:55 crc kubenswrapper[4703]: E1209 13:03:55.203942 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:03:55 crc kubenswrapper[4703]: E1209 13:03:55.204219 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:03:55 crc kubenswrapper[4703]: E1209 13:03:55.205483 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:04:00 crc kubenswrapper[4703]: I1209 13:04:00.084004 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:04:00 crc kubenswrapper[4703]: I1209 13:04:00.084591 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:04:05 crc kubenswrapper[4703]: E1209 13:04:05.072946 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:04:09 crc kubenswrapper[4703]: E1209 13:04:09.076243 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:04:20 crc kubenswrapper[4703]: E1209 13:04:20.188164 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:04:20 crc kubenswrapper[4703]: E1209 13:04:20.189002 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:04:20 crc kubenswrapper[4703]: E1209 13:04:20.189208 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:04:20 crc kubenswrapper[4703]: E1209 13:04:20.190456 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:04:24 crc kubenswrapper[4703]: E1209 13:04:24.075455 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:04:30 crc kubenswrapper[4703]: I1209 13:04:30.083859 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:04:30 crc kubenswrapper[4703]: I1209 13:04:30.084594 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:04:31 crc kubenswrapper[4703]: E1209 13:04:31.080121 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:04:36 crc kubenswrapper[4703]: E1209 13:04:36.072680 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:04:44 crc kubenswrapper[4703]: E1209 13:04:44.074363 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:04:51 crc kubenswrapper[4703]: E1209 13:04:51.085171 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
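The periodic re-pulls at 13:03:55 and 13:04:20 surface the actual root cause behind all the back-off noise: the registry answers the manifest lookup with "Tag current-tested was deleted or has expired. To pull, revive via time machine", so every retry is doomed until the tag is restored or the pod specs point at a different reference. Whether a tag still resolves can be checked directly against the registry's standard v2 manifest endpoint; a minimal sketch, assuming anonymous access to quay.rdoproject.org works for these repositories (some registries require a token handshake first, which this deliberately skips):

```python
import urllib.request
import urllib.error

def tag_exists(repo, tag, registry="quay.rdoproject.org"):
    """HEAD the v2 manifest endpoint; 200 means the tag resolves, a 4xx means it does not."""
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(tag_exists("podified-master-centos10/openstack-ceilometer-central", "current-tested"))
print(tag_exists("podified-master-centos10/openstack-cloudkitty-api", "current-tested"))
```

At the time of this capture both checks would come back False, consistent with the "deleted or has expired" message from the pull attempts.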
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:04:51 crc kubenswrapper[4703]: I1209 13:04:51.091418 4703 generic.go:334] "Generic (PLEG): container finished" podID="cbef74eb-61bc-4efa-8621-6e089311a571" containerID="ae9d9adcd217469896806fcfa805bec8eeff82d4a4a57c38e2f177119c11f668" exitCode=2 Dec 09 13:04:51 crc kubenswrapper[4703]: I1209 13:04:51.091471 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" event={"ID":"cbef74eb-61bc-4efa-8621-6e089311a571","Type":"ContainerDied","Data":"ae9d9adcd217469896806fcfa805bec8eeff82d4a4a57c38e2f177119c11f668"} Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.622037 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.707494 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfshh\" (UniqueName: \"kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh\") pod \"cbef74eb-61bc-4efa-8621-6e089311a571\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.708163 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory\") pod \"cbef74eb-61bc-4efa-8621-6e089311a571\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.708551 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key\") pod \"cbef74eb-61bc-4efa-8621-6e089311a571\" (UID: \"cbef74eb-61bc-4efa-8621-6e089311a571\") " Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.715430 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh" (OuterVolumeSpecName: "kube-api-access-rfshh") pod "cbef74eb-61bc-4efa-8621-6e089311a571" (UID: "cbef74eb-61bc-4efa-8621-6e089311a571"). InnerVolumeSpecName "kube-api-access-rfshh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.742413 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cbef74eb-61bc-4efa-8621-6e089311a571" (UID: "cbef74eb-61bc-4efa-8621-6e089311a571"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.743659 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory" (OuterVolumeSpecName: "inventory") pod "cbef74eb-61bc-4efa-8621-6e089311a571" (UID: "cbef74eb-61bc-4efa-8621-6e089311a571"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.812133 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.812206 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfshh\" (UniqueName: \"kubernetes.io/projected/cbef74eb-61bc-4efa-8621-6e089311a571-kube-api-access-rfshh\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:52 crc kubenswrapper[4703]: I1209 13:04:52.812223 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbef74eb-61bc-4efa-8621-6e089311a571-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:53 crc kubenswrapper[4703]: I1209 13:04:53.114718 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" event={"ID":"cbef74eb-61bc-4efa-8621-6e089311a571","Type":"ContainerDied","Data":"dec4204b21ebcfbbe5f8f4d1c9fbd5df875e4e2de0c7f6ed51357713cba1c388"} Dec 09 13:04:53 crc kubenswrapper[4703]: I1209 13:04:53.114791 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec4204b21ebcfbbe5f8f4d1c9fbd5df875e4e2de0c7f6ed51357713cba1c388" Dec 09 13:04:53 crc kubenswrapper[4703]: I1209 13:04:53.114834 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cf88n" Dec 09 13:04:57 crc kubenswrapper[4703]: E1209 13:04:57.073738 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:05:00 crc kubenswrapper[4703]: I1209 13:05:00.083883 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:05:00 crc kubenswrapper[4703]: I1209 13:05:00.084256 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:05:00 crc kubenswrapper[4703]: I1209 13:05:00.084314 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:05:00 crc kubenswrapper[4703]: I1209 13:05:00.085137 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:05:00 crc kubenswrapper[4703]: I1209 13:05:00.085264 4703 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" gracePeriod=600 Dec 09 13:05:00 crc kubenswrapper[4703]: E1209 13:05:00.210676 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:01 crc kubenswrapper[4703]: I1209 13:05:01.207543 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" exitCode=0 Dec 09 13:05:01 crc kubenswrapper[4703]: I1209 13:05:01.207760 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9"} Dec 09 13:05:01 crc kubenswrapper[4703]: I1209 13:05:01.207892 4703 scope.go:117] "RemoveContainer" containerID="d82dddf25115b97eacea097c2d89d3dbbeaff47f141eed9d2816fe7e25a4ddb7" Dec 09 13:05:01 crc kubenswrapper[4703]: I1209 13:05:01.208958 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:05:01 crc kubenswrapper[4703]: E1209 13:05:01.209502 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:05 crc kubenswrapper[4703]: E1209 13:05:05.073231 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:05:12 crc kubenswrapper[4703]: E1209 13:05:12.073389 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:05:13 crc kubenswrapper[4703]: I1209 13:05:13.070895 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:05:13 crc kubenswrapper[4703]: E1209 13:05:13.071248 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:19 crc kubenswrapper[4703]: E1209 13:05:19.075557 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:05:26 crc kubenswrapper[4703]: I1209 13:05:26.070247 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:05:26 crc kubenswrapper[4703]: E1209 13:05:26.071159 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:27 crc kubenswrapper[4703]: E1209 13:05:27.072681 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:05:32 crc kubenswrapper[4703]: E1209 13:05:32.073250 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:05:38 crc kubenswrapper[4703]: I1209 13:05:38.069996 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:05:38 crc kubenswrapper[4703]: E1209 13:05:38.070804 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:39 crc kubenswrapper[4703]: E1209 13:05:39.072917 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:05:46 crc kubenswrapper[4703]: E1209 13:05:46.073551 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:05:50 crc kubenswrapper[4703]: I1209 13:05:50.070179 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:05:50 crc kubenswrapper[4703]: E1209 13:05:50.071382 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:05:54 crc kubenswrapper[4703]: E1209 13:05:54.072753 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:05:57 crc kubenswrapper[4703]: E1209 13:05:57.072581 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:06:04 crc kubenswrapper[4703]: I1209 13:06:04.070098 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:06:04 crc kubenswrapper[4703]: E1209 13:06:04.071864 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:06:07 crc kubenswrapper[4703]: E1209 13:06:07.071815 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.037391 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"] Dec 09 13:06:10 crc kubenswrapper[4703]: E1209 13:06:10.038278 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbef74eb-61bc-4efa-8621-6e089311a571" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.038301 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbef74eb-61bc-4efa-8621-6e089311a571" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:06:10 crc kubenswrapper[4703]: E1209 13:06:10.038317 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="registry-server" Dec 09 13:06:10 crc kubenswrapper[4703]: 
Dec 09 13:06:10 crc kubenswrapper[4703]: E1209 13:06:10.038342 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="extract-content"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.038349 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="extract-content"
Dec 09 13:06:10 crc kubenswrapper[4703]: E1209 13:06:10.038397 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="extract-utilities"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.038407 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="extract-utilities"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.038656 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbef74eb-61bc-4efa-8621-6e089311a571" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.038686 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f50df7-14bc-4b46-8171-a032fa5932ed" containerName="registry-server"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.039743 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.042402 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.042608 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.042627 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.042639 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.067338 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"]
Dec 09 13:06:10 crc kubenswrapper[4703]: E1209 13:06:10.074463 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.147587 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pjs\" (UniqueName: \"kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.147647 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.148808 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.250942 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pjs\" (UniqueName: \"kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.251011 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.251161 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.267963 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.273122 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pjs\" (UniqueName: \"kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.287355 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.365613 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"
Dec 09 13:06:10 crc kubenswrapper[4703]: I1209 13:06:10.929207 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg"]
Dec 09 13:06:10 crc kubenswrapper[4703]: W1209 13:06:10.941939 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff13d4cd_5b50_4df6_9c21_fe4eed8fa7bc.slice/crio-ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba WatchSource:0}: Error finding container ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba: Status 404 returned error can't find the container with id ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba
Dec 09 13:06:11 crc kubenswrapper[4703]: I1209 13:06:11.010254 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" event={"ID":"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc","Type":"ContainerStarted","Data":"ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba"}
Dec 09 13:06:12 crc kubenswrapper[4703]: I1209 13:06:12.026479 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" event={"ID":"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc","Type":"ContainerStarted","Data":"aad8b909c0c144fcbb74628a8614aa6bc84b5194c553ffbe388297dff026dd0f"}
Dec 09 13:06:12 crc kubenswrapper[4703]: I1209 13:06:12.051650 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" podStartSLOduration=1.5503965690000001 podStartE2EDuration="2.051629342s" podCreationTimestamp="2025-12-09 13:06:10 +0000 UTC" firstStartedPulling="2025-12-09 13:06:10.945280394 +0000 UTC m=+3670.194043913" lastFinishedPulling="2025-12-09 13:06:11.446513167 +0000 UTC m=+3670.695276686" observedRunningTime="2025-12-09 13:06:12.043634362 +0000 UTC m=+3671.292397881" watchObservedRunningTime="2025-12-09 13:06:12.051629342 +0000 UTC m=+3671.300392861"
Dec 09 13:06:15 crc kubenswrapper[4703]: I1209 13:06:15.070688 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9"
Dec 09 13:06:15 crc kubenswrapper[4703]: E1209 13:06:15.071609 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:06:18 crc kubenswrapper[4703]: E1209 13:06:18.073708 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:06:24 crc kubenswrapper[4703]: E1209 13:06:24.072695 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:06:29 crc kubenswrapper[4703]: I1209 13:06:29.069462 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:06:29 crc kubenswrapper[4703]: E1209 13:06:29.070304 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:06:31 crc kubenswrapper[4703]: E1209 13:06:31.078930 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:06:38 crc kubenswrapper[4703]: E1209 13:06:38.072665 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:06:43 crc kubenswrapper[4703]: I1209 13:06:43.070795 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:06:43 crc kubenswrapper[4703]: E1209 13:06:43.073416 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:06:45 crc kubenswrapper[4703]: E1209 13:06:45.073096 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:06:49 crc kubenswrapper[4703]: E1209 13:06:49.073108 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:06:56 crc kubenswrapper[4703]: I1209 13:06:56.069546 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:06:56 crc kubenswrapper[4703]: E1209 13:06:56.070380 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:06:57 crc kubenswrapper[4703]: E1209 13:06:57.077560 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:07:02 crc kubenswrapper[4703]: E1209 13:07:02.072877 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:07:09 crc kubenswrapper[4703]: E1209 13:07:09.074519 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:07:10 crc kubenswrapper[4703]: I1209 13:07:10.069537 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:07:10 crc kubenswrapper[4703]: E1209 13:07:10.070327 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:07:14 crc kubenswrapper[4703]: E1209 13:07:14.072135 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:07:22 crc kubenswrapper[4703]: I1209 13:07:22.070441 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:07:22 crc kubenswrapper[4703]: E1209 13:07:22.074364 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:07:22 crc kubenswrapper[4703]: E1209 13:07:22.074555 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:07:29 crc kubenswrapper[4703]: E1209 13:07:29.072247 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:07:35 crc kubenswrapper[4703]: E1209 13:07:35.074048 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:07:37 crc kubenswrapper[4703]: I1209 13:07:37.071413 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:07:37 crc kubenswrapper[4703]: E1209 13:07:37.072060 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:07:41 crc kubenswrapper[4703]: E1209 13:07:41.078587 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:07:49 crc kubenswrapper[4703]: I1209 13:07:49.069765 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:07:49 crc kubenswrapper[4703]: E1209 13:07:49.070738 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:07:50 crc kubenswrapper[4703]: E1209 13:07:50.072672 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:07:56 crc kubenswrapper[4703]: E1209 13:07:56.070935 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:08:01 crc kubenswrapper[4703]: E1209 13:08:01.093106 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:08:04 crc kubenswrapper[4703]: I1209 13:08:04.069527 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:08:04 crc kubenswrapper[4703]: E1209 13:08:04.071712 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:08:10 crc kubenswrapper[4703]: E1209 13:08:10.072682 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:08:12 crc kubenswrapper[4703]: E1209 13:08:12.071608 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.495404 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.499071 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.524232 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.630381 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.630762 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb6wp\" (UniqueName: \"kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.630876 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.733460 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.733712 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.733832 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb6wp\" (UniqueName: \"kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.734152 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.734464 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.757511 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cb6wp\" (UniqueName: \"kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp\") pod \"community-operators-8jv2v\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:14 crc kubenswrapper[4703]: I1209 13:08:14.825028 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:15 crc kubenswrapper[4703]: I1209 13:08:15.517452 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:16 crc kubenswrapper[4703]: I1209 13:08:16.473387 4703 generic.go:334] "Generic (PLEG): container finished" podID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerID="f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0" exitCode=0 Dec 09 13:08:16 crc kubenswrapper[4703]: I1209 13:08:16.474203 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerDied","Data":"f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0"} Dec 09 13:08:16 crc kubenswrapper[4703]: I1209 13:08:16.474244 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerStarted","Data":"9af674015373d39e19ecdeceb81f87fd11879cb60c1a0287e9032e73cd08c018"} Dec 09 13:08:17 crc kubenswrapper[4703]: I1209 13:08:17.490284 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerStarted","Data":"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33"} Dec 09 13:08:18 crc kubenswrapper[4703]: I1209 13:08:18.070554 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:08:18 crc kubenswrapper[4703]: E1209 13:08:18.071022 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:08:18 crc kubenswrapper[4703]: I1209 13:08:18.506705 4703 generic.go:334] "Generic (PLEG): container finished" podID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerID="430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33" exitCode=0 Dec 09 13:08:18 crc kubenswrapper[4703]: I1209 13:08:18.506905 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerDied","Data":"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33"} Dec 09 13:08:19 crc kubenswrapper[4703]: I1209 13:08:19.523659 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerStarted","Data":"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1"} Dec 09 13:08:19 crc kubenswrapper[4703]: I1209 13:08:19.549112 4703 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jv2v" podStartSLOduration=3.113692421 podStartE2EDuration="5.549087328s" podCreationTimestamp="2025-12-09 13:08:14 +0000 UTC" firstStartedPulling="2025-12-09 13:08:16.478088321 +0000 UTC m=+3795.726851840" lastFinishedPulling="2025-12-09 13:08:18.913483238 +0000 UTC m=+3798.162246747" observedRunningTime="2025-12-09 13:08:19.542909715 +0000 UTC m=+3798.791673234" watchObservedRunningTime="2025-12-09 13:08:19.549087328 +0000 UTC m=+3798.797850847" Dec 09 13:08:24 crc kubenswrapper[4703]: I1209 13:08:24.826020 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:24 crc kubenswrapper[4703]: I1209 13:08:24.826853 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:24 crc kubenswrapper[4703]: I1209 13:08:24.879088 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:25 crc kubenswrapper[4703]: E1209 13:08:25.072018 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:08:25 crc kubenswrapper[4703]: I1209 13:08:25.648552 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:25 crc kubenswrapper[4703]: I1209 13:08:25.708788 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:27 crc kubenswrapper[4703]: E1209 13:08:27.072124 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:08:27 crc kubenswrapper[4703]: I1209 13:08:27.617918 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8jv2v" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="registry-server" containerID="cri-o://475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1" gracePeriod=2 Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.205286 4703 util.go:48] "No ready sandbox for pod can be found. 
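The probe transitions and the SyncLoop DELETE above look like the catalog pod's normal end of life, with openshift-marketplace replacing the registry pod shortly after it goes ready. The same lifecycle can be followed from outside the node through the API's Event stream instead of the kubelet journal; a sketch, assuming the kubernetes Python client and a kubeconfig with read access to the namespace:

    from kubernetes import client, config, watch
    # Events carry the Pulled/Created/Started/Killing transitions that show up
    # here as SyncLoop (PLEG) entries in the kubelet log.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for ev in watch.Watch().stream(v1.list_namespaced_event,
                                   namespace="openshift-marketplace",
                                   timeout_seconds=60):
        o = ev["object"]
        print(o.last_timestamp, o.involved_object.name, o.reason, o.message)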
Need to start a new one" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.363997 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb6wp\" (UniqueName: \"kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp\") pod \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.364154 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content\") pod \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.364232 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities\") pod \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\" (UID: \"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c\") " Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.366435 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities" (OuterVolumeSpecName: "utilities") pod "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" (UID: "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.367434 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.374139 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp" (OuterVolumeSpecName: "kube-api-access-cb6wp") pod "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" (UID: "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c"). InnerVolumeSpecName "kube-api-access-cb6wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.428873 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" (UID: "64be9afb-e5ab-4f29-abf1-6d4914bdbd4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.469420 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb6wp\" (UniqueName: \"kubernetes.io/projected/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-kube-api-access-cb6wp\") on node \"crc\" DevicePath \"\"" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.469463 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.632855 4703 generic.go:334] "Generic (PLEG): container finished" podID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerID="475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1" exitCode=0 Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.632899 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jv2v" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.632904 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerDied","Data":"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1"} Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.632937 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jv2v" event={"ID":"64be9afb-e5ab-4f29-abf1-6d4914bdbd4c","Type":"ContainerDied","Data":"9af674015373d39e19ecdeceb81f87fd11879cb60c1a0287e9032e73cd08c018"} Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.632958 4703 scope.go:117] "RemoveContainer" containerID="475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.666572 4703 scope.go:117] "RemoveContainer" containerID="430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.671624 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.684548 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8jv2v"] Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.695480 4703 scope.go:117] "RemoveContainer" containerID="f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.762172 4703 scope.go:117] "RemoveContainer" containerID="475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1" Dec 09 13:08:28 crc kubenswrapper[4703]: E1209 13:08:28.763103 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1\": container with ID starting with 475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1 not found: ID does not exist" containerID="475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.763147 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1"} err="failed to get container status 
\"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1\": rpc error: code = NotFound desc = could not find container \"475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1\": container with ID starting with 475e3224e6707a14ec4b14712388d25fa6b46e38b856b5963a8bb36fbe236ff1 not found: ID does not exist" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.763177 4703 scope.go:117] "RemoveContainer" containerID="430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33" Dec 09 13:08:28 crc kubenswrapper[4703]: E1209 13:08:28.763838 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33\": container with ID starting with 430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33 not found: ID does not exist" containerID="430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.763881 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33"} err="failed to get container status \"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33\": rpc error: code = NotFound desc = could not find container \"430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33\": container with ID starting with 430c1ae8a9fdbbcd40257561ac759c8e8046afc0234f5ea34e01e9b43d456a33 not found: ID does not exist" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.763915 4703 scope.go:117] "RemoveContainer" containerID="f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0" Dec 09 13:08:28 crc kubenswrapper[4703]: E1209 13:08:28.764570 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0\": container with ID starting with f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0 not found: ID does not exist" containerID="f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0" Dec 09 13:08:28 crc kubenswrapper[4703]: I1209 13:08:28.764603 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0"} err="failed to get container status \"f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0\": rpc error: code = NotFound desc = could not find container \"f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0\": container with ID starting with f26ee0c41b51b729b1d062dda50ca50ce43b244edcf6f1fde195a196b88e93b0 not found: ID does not exist" Dec 09 13:08:29 crc kubenswrapper[4703]: I1209 13:08:29.092607 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" path="/var/lib/kubelet/pods/64be9afb-e5ab-4f29-abf1-6d4914bdbd4c/volumes" Dec 09 13:08:33 crc kubenswrapper[4703]: I1209 13:08:33.070671 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:08:33 crc kubenswrapper[4703]: E1209 13:08:33.071535 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:08:39 crc kubenswrapper[4703]: E1209 13:08:39.074061 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:08:40 crc kubenswrapper[4703]: E1209 13:08:40.073122 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:08:47 crc kubenswrapper[4703]: I1209 13:08:47.070425 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:08:47 crc kubenswrapper[4703]: E1209 13:08:47.071343 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:08:50 crc kubenswrapper[4703]: E1209 13:08:50.072753 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:08:53 crc kubenswrapper[4703]: E1209 13:08:53.073268 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:09:01 crc kubenswrapper[4703]: I1209 13:09:01.079420 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:09:01 crc kubenswrapper[4703]: E1209 13:09:01.097496 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:09:04 crc kubenswrapper[4703]: I1209 13:09:04.077165 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:09:04 crc kubenswrapper[4703]: E1209 13:09:04.204820 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:09:04 crc kubenswrapper[4703]: E1209 13:09:04.205246 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:09:04 crc kubenswrapper[4703]: E1209 13:09:04.205414 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:09:04 crc kubenswrapper[4703]: E1209 13:09:04.206663 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:09:07 crc kubenswrapper[4703]: E1209 13:09:07.071734 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:09:15 crc kubenswrapper[4703]: I1209 13:09:15.069756 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:09:15 crc kubenswrapper[4703]: E1209 13:09:15.071663 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:09:17 crc kubenswrapper[4703]: E1209 13:09:17.072898 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:09:20 crc kubenswrapper[4703]: E1209 13:09:20.073357 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:09:27 crc kubenswrapper[4703]: I1209 13:09:27.070036 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:09:27 crc kubenswrapper[4703]: E1209 13:09:27.070839 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:09:30 crc kubenswrapper[4703]: E1209 
13:09:30.080330 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:09:34 crc kubenswrapper[4703]: E1209 13:09:34.186667 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:09:34 crc kubenswrapper[4703]: E1209 13:09:34.187429 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:09:34 crc kubenswrapper[4703]: E1209 13:09:34.187696 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*
42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:09:34 crc kubenswrapper[4703]: E1209 13:09:34.188971 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:09:41 crc kubenswrapper[4703]: I1209 13:09:41.090904 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:09:41 crc kubenswrapper[4703]: E1209 13:09:41.092317 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:09:44 crc kubenswrapper[4703]: E1209 13:09:44.073582 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:09:47 crc kubenswrapper[4703]: E1209 13:09:47.073676 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:09:52 crc kubenswrapper[4703]: I1209 13:09:52.070511 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:09:52 crc kubenswrapper[4703]: E1209 13:09:52.071403 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" 
podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:09:59 crc kubenswrapper[4703]: E1209 13:09:59.072976 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:09:59 crc kubenswrapper[4703]: E1209 13:09:59.073014 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.720380 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:02 crc kubenswrapper[4703]: E1209 13:10:02.721293 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="extract-utilities" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.721306 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="extract-utilities" Dec 09 13:10:02 crc kubenswrapper[4703]: E1209 13:10:02.721319 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="registry-server" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.721326 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="registry-server" Dec 09 13:10:02 crc kubenswrapper[4703]: E1209 13:10:02.721346 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="extract-content" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.721352 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="extract-content" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.721584 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="64be9afb-e5ab-4f29-abf1-6d4914bdbd4c" containerName="registry-server" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.723122 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.738995 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.809347 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.809974 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.810129 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4cw\" (UniqueName: \"kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.912697 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.912898 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.912955 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4cw\" (UniqueName: \"kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.913569 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.913781 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:02 crc kubenswrapper[4703]: I1209 13:10:02.941428 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vb4cw\" (UniqueName: \"kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw\") pod \"certified-operators-d4sbr\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:03 crc kubenswrapper[4703]: I1209 13:10:03.088090 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:03 crc kubenswrapper[4703]: I1209 13:10:03.679348 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:03 crc kubenswrapper[4703]: I1209 13:10:03.715649 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerStarted","Data":"79805256119fa3936a1e26a1b37494f4d5b7d772471fbd05d28f12cd968aea6c"} Dec 09 13:10:04 crc kubenswrapper[4703]: I1209 13:10:04.741550 4703 generic.go:334] "Generic (PLEG): container finished" podID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerID="9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4" exitCode=0 Dec 09 13:10:04 crc kubenswrapper[4703]: I1209 13:10:04.741975 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerDied","Data":"9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4"} Dec 09 13:10:05 crc kubenswrapper[4703]: I1209 13:10:05.070603 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9" Dec 09 13:10:05 crc kubenswrapper[4703]: I1209 13:10:05.754786 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04"} Dec 09 13:10:06 crc kubenswrapper[4703]: I1209 13:10:06.781718 4703 generic.go:334] "Generic (PLEG): container finished" podID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerID="368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32" exitCode=0 Dec 09 13:10:06 crc kubenswrapper[4703]: I1209 13:10:06.781828 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerDied","Data":"368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32"} Dec 09 13:10:07 crc kubenswrapper[4703]: I1209 13:10:07.793773 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerStarted","Data":"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a"} Dec 09 13:10:07 crc kubenswrapper[4703]: I1209 13:10:07.822881 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d4sbr" podStartSLOduration=3.210743458 podStartE2EDuration="5.82286186s" podCreationTimestamp="2025-12-09 13:10:02 +0000 UTC" firstStartedPulling="2025-12-09 13:10:04.750110497 +0000 UTC m=+3903.998874016" lastFinishedPulling="2025-12-09 13:10:07.362228899 +0000 UTC m=+3906.610992418" observedRunningTime="2025-12-09 13:10:07.816145973 +0000 UTC m=+3907.064909492" 
watchObservedRunningTime="2025-12-09 13:10:07.82286186 +0000 UTC m=+3907.071625379" Dec 09 13:10:10 crc kubenswrapper[4703]: E1209 13:10:10.073930 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:10:12 crc kubenswrapper[4703]: E1209 13:10:12.072094 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:10:13 crc kubenswrapper[4703]: I1209 13:10:13.088999 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:13 crc kubenswrapper[4703]: I1209 13:10:13.090476 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:13 crc kubenswrapper[4703]: I1209 13:10:13.191586 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:13 crc kubenswrapper[4703]: I1209 13:10:13.942837 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:14 crc kubenswrapper[4703]: I1209 13:10:14.033714 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:15 crc kubenswrapper[4703]: I1209 13:10:15.873994 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d4sbr" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="registry-server" containerID="cri-o://6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a" gracePeriod=2 Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.656915 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.817570 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content\") pod \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.817743 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities\") pod \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.818625 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities" (OuterVolumeSpecName: "utilities") pod "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" (UID: "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.818866 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4cw\" (UniqueName: \"kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw\") pod \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\" (UID: \"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd\") " Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.820081 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.827490 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw" (OuterVolumeSpecName: "kube-api-access-vb4cw") pod "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" (UID: "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd"). InnerVolumeSpecName "kube-api-access-vb4cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.856671 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" (UID: "a702dfb8-0ef9-4e32-9af7-32eb8b2118dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.887413 4703 generic.go:334] "Generic (PLEG): container finished" podID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerID="6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a" exitCode=0 Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.887478 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerDied","Data":"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a"} Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.887546 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d4sbr" event={"ID":"a702dfb8-0ef9-4e32-9af7-32eb8b2118dd","Type":"ContainerDied","Data":"79805256119fa3936a1e26a1b37494f4d5b7d772471fbd05d28f12cd968aea6c"} Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.887571 4703 scope.go:117] "RemoveContainer" containerID="6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.887498 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d4sbr" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.918159 4703 scope.go:117] "RemoveContainer" containerID="368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.922950 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.922992 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4cw\" (UniqueName: \"kubernetes.io/projected/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd-kube-api-access-vb4cw\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.942280 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.953641 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d4sbr"] Dec 09 13:10:16 crc kubenswrapper[4703]: I1209 13:10:16.970035 4703 scope.go:117] "RemoveContainer" containerID="9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.030320 4703 scope.go:117] "RemoveContainer" containerID="6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a" Dec 09 13:10:17 crc kubenswrapper[4703]: E1209 13:10:17.031070 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a\": container with ID starting with 6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a not found: ID does not exist" containerID="6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.031101 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a"} err="failed to get container status \"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a\": rpc error: code = NotFound desc = could not find container \"6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a\": container with ID starting with 6cc172d511898bf58290a1ba14b7ff59cd4c81ec1d47087211ea43a7213efe8a not found: ID does not exist" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.031138 4703 scope.go:117] "RemoveContainer" containerID="368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32" Dec 09 13:10:17 crc kubenswrapper[4703]: E1209 13:10:17.031513 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32\": container with ID starting with 368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32 not found: ID does not exist" containerID="368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.031555 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32"} err="failed to get container status \"368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32\": 
rpc error: code = NotFound desc = could not find container \"368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32\": container with ID starting with 368f241e391d35f0c6904b9d3b96495c4eee6ed24062b99ca25a9cfa9dfe8d32 not found: ID does not exist" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.031568 4703 scope.go:117] "RemoveContainer" containerID="9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4" Dec 09 13:10:17 crc kubenswrapper[4703]: E1209 13:10:17.031958 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4\": container with ID starting with 9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4 not found: ID does not exist" containerID="9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.031996 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4"} err="failed to get container status \"9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4\": rpc error: code = NotFound desc = could not find container \"9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4\": container with ID starting with 9d902e276ff57d46a1f4acfc541e119897248cffd88a40998c3a68c0b96b33e4 not found: ID does not exist" Dec 09 13:10:17 crc kubenswrapper[4703]: I1209 13:10:17.086046 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" path="/var/lib/kubelet/pods/a702dfb8-0ef9-4e32-9af7-32eb8b2118dd/volumes" Dec 09 13:10:24 crc kubenswrapper[4703]: E1209 13:10:24.079631 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:10:25 crc kubenswrapper[4703]: E1209 13:10:25.072661 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.262732 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:30 crc kubenswrapper[4703]: E1209 13:10:30.264071 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="registry-server" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.264092 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="registry-server" Dec 09 13:10:30 crc kubenswrapper[4703]: E1209 13:10:30.264140 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="extract-utilities" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.264149 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="extract-utilities" Dec 09 13:10:30 crc 
kubenswrapper[4703]: E1209 13:10:30.264174 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="extract-content" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.264184 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="extract-content" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.264470 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a702dfb8-0ef9-4e32-9af7-32eb8b2118dd" containerName="registry-server" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.266470 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.284568 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.399387 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.399737 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.400161 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxph\" (UniqueName: \"kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.502976 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.503122 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.503245 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqxph\" (UniqueName: \"kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.503461 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.503944 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.526374 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqxph\" (UniqueName: \"kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph\") pod \"redhat-operators-scvrm\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:30 crc kubenswrapper[4703]: I1209 13:10:30.610461 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:31 crc kubenswrapper[4703]: I1209 13:10:31.205676 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:32 crc kubenswrapper[4703]: I1209 13:10:32.107992 4703 generic.go:334] "Generic (PLEG): container finished" podID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerID="7e26a949d39405cf7c52a585b51acb6f76f9ea537e7357e3994be765310d52fb" exitCode=0 Dec 09 13:10:32 crc kubenswrapper[4703]: I1209 13:10:32.108118 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerDied","Data":"7e26a949d39405cf7c52a585b51acb6f76f9ea537e7357e3994be765310d52fb"} Dec 09 13:10:32 crc kubenswrapper[4703]: I1209 13:10:32.108355 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerStarted","Data":"b235596077207051c55d8d73a81d555df7cb57af9d8b8d8b0da9c1544e96fb11"} Dec 09 13:10:33 crc kubenswrapper[4703]: I1209 13:10:33.131161 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerStarted","Data":"ba751d50e9d158e3065de0bab002834ed2a4c55a61f8d350bf5acb0d1b35921b"} Dec 09 13:10:36 crc kubenswrapper[4703]: I1209 13:10:36.162437 4703 generic.go:334] "Generic (PLEG): container finished" podID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerID="ba751d50e9d158e3065de0bab002834ed2a4c55a61f8d350bf5acb0d1b35921b" exitCode=0 Dec 09 13:10:36 crc kubenswrapper[4703]: I1209 13:10:36.162491 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerDied","Data":"ba751d50e9d158e3065de0bab002834ed2a4c55a61f8d350bf5acb0d1b35921b"} Dec 09 13:10:37 crc kubenswrapper[4703]: I1209 13:10:37.207514 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerStarted","Data":"f6f8187b917c7e479d16612b0edc8a76b48261d9a055a92fc8e0d2b8be5ea11f"} Dec 09 13:10:37 crc kubenswrapper[4703]: I1209 13:10:37.231080 4703 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scvrm" podStartSLOduration=2.8105469530000002 podStartE2EDuration="7.231064509s" podCreationTimestamp="2025-12-09 13:10:30 +0000 UTC" firstStartedPulling="2025-12-09 13:10:32.11199287 +0000 UTC m=+3931.360756389" lastFinishedPulling="2025-12-09 13:10:36.532510416 +0000 UTC m=+3935.781273945" observedRunningTime="2025-12-09 13:10:37.229707853 +0000 UTC m=+3936.478471372" watchObservedRunningTime="2025-12-09 13:10:37.231064509 +0000 UTC m=+3936.479828028" Dec 09 13:10:39 crc kubenswrapper[4703]: E1209 13:10:39.072305 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:10:40 crc kubenswrapper[4703]: E1209 13:10:40.071433 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:10:40 crc kubenswrapper[4703]: I1209 13:10:40.611696 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:40 crc kubenswrapper[4703]: I1209 13:10:40.611749 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:41 crc kubenswrapper[4703]: I1209 13:10:41.667998 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-scvrm" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="registry-server" probeResult="failure" output=< Dec 09 13:10:41 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 13:10:41 crc kubenswrapper[4703]: > Dec 09 13:10:50 crc kubenswrapper[4703]: I1209 13:10:50.739179 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:50 crc kubenswrapper[4703]: I1209 13:10:50.793932 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:50 crc kubenswrapper[4703]: I1209 13:10:50.980731 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:52 crc kubenswrapper[4703]: I1209 13:10:52.355020 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-scvrm" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="registry-server" containerID="cri-o://f6f8187b917c7e479d16612b0edc8a76b48261d9a055a92fc8e0d2b8be5ea11f" gracePeriod=2 Dec 09 13:10:53 crc kubenswrapper[4703]: E1209 13:10:53.071500 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 
13:10:53.366896 4703 generic.go:334] "Generic (PLEG): container finished" podID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerID="f6f8187b917c7e479d16612b0edc8a76b48261d9a055a92fc8e0d2b8be5ea11f" exitCode=0 Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.366971 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerDied","Data":"f6f8187b917c7e479d16612b0edc8a76b48261d9a055a92fc8e0d2b8be5ea11f"} Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.714719 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.769664 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities\") pod \"1af388a2-b042-4681-80cf-c2563c82fd7f\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.769927 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqxph\" (UniqueName: \"kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph\") pod \"1af388a2-b042-4681-80cf-c2563c82fd7f\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.770208 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content\") pod \"1af388a2-b042-4681-80cf-c2563c82fd7f\" (UID: \"1af388a2-b042-4681-80cf-c2563c82fd7f\") " Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.770819 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities" (OuterVolumeSpecName: "utilities") pod "1af388a2-b042-4681-80cf-c2563c82fd7f" (UID: "1af388a2-b042-4681-80cf-c2563c82fd7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.778821 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph" (OuterVolumeSpecName: "kube-api-access-pqxph") pod "1af388a2-b042-4681-80cf-c2563c82fd7f" (UID: "1af388a2-b042-4681-80cf-c2563c82fd7f"). InnerVolumeSpecName "kube-api-access-pqxph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.797354 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.800122 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqxph\" (UniqueName: \"kubernetes.io/projected/1af388a2-b042-4681-80cf-c2563c82fd7f-kube-api-access-pqxph\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:53 crc kubenswrapper[4703]: I1209 13:10:53.919150 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1af388a2-b042-4681-80cf-c2563c82fd7f" (UID: "1af388a2-b042-4681-80cf-c2563c82fd7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.003863 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1af388a2-b042-4681-80cf-c2563c82fd7f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:10:54 crc kubenswrapper[4703]: E1209 13:10:54.072416 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.380490 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scvrm" event={"ID":"1af388a2-b042-4681-80cf-c2563c82fd7f","Type":"ContainerDied","Data":"b235596077207051c55d8d73a81d555df7cb57af9d8b8d8b0da9c1544e96fb11"} Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.380555 4703 scope.go:117] "RemoveContainer" containerID="f6f8187b917c7e479d16612b0edc8a76b48261d9a055a92fc8e0d2b8be5ea11f" Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.380572 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-scvrm" Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.417943 4703 scope.go:117] "RemoveContainer" containerID="ba751d50e9d158e3065de0bab002834ed2a4c55a61f8d350bf5acb0d1b35921b" Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.441696 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.457343 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-scvrm"] Dec 09 13:10:54 crc kubenswrapper[4703]: I1209 13:10:54.751133 4703 scope.go:117] "RemoveContainer" containerID="7e26a949d39405cf7c52a585b51acb6f76f9ea537e7357e3994be765310d52fb" Dec 09 13:10:55 crc kubenswrapper[4703]: I1209 13:10:55.084007 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" path="/var/lib/kubelet/pods/1af388a2-b042-4681-80cf-c2563c82fd7f/volumes" Dec 09 13:11:06 crc kubenswrapper[4703]: E1209 13:11:06.073795 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:11:06 crc kubenswrapper[4703]: E1209 13:11:06.073810 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:11:18 crc kubenswrapper[4703]: E1209 13:11:18.073017 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:11:20 crc kubenswrapper[4703]: E1209 13:11:20.072892 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:11:29 crc kubenswrapper[4703]: E1209 13:11:29.078977 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:11:35 crc kubenswrapper[4703]: E1209 13:11:35.074889 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:11:43 crc kubenswrapper[4703]: E1209 13:11:43.074702 4703 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:11:46 crc kubenswrapper[4703]: E1209 13:11:46.073563 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:11:56 crc kubenswrapper[4703]: E1209 13:11:56.074802 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:12:00 crc kubenswrapper[4703]: E1209 13:12:00.073527 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:12:08 crc kubenswrapper[4703]: E1209 13:12:08.075167 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:12:12 crc kubenswrapper[4703]: E1209 13:12:12.072505 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:12:23 crc kubenswrapper[4703]: E1209 13:12:23.073687 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:12:24 crc kubenswrapper[4703]: E1209 13:12:24.071393 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:12:29 crc kubenswrapper[4703]: I1209 13:12:29.429481 4703 generic.go:334] "Generic (PLEG): container finished" podID="ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" containerID="aad8b909c0c144fcbb74628a8614aa6bc84b5194c553ffbe388297dff026dd0f" exitCode=2 Dec 09 13:12:29 crc kubenswrapper[4703]: I1209 13:12:29.429552 4703 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" event={"ID":"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc","Type":"ContainerDied","Data":"aad8b909c0c144fcbb74628a8614aa6bc84b5194c553ffbe388297dff026dd0f"} Dec 09 13:12:30 crc kubenswrapper[4703]: I1209 13:12:30.084169 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:12:30 crc kubenswrapper[4703]: I1209 13:12:30.084251 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.103051 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.244337 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory\") pod \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.244451 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key\") pod \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.244623 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7pjs\" (UniqueName: \"kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs\") pod \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\" (UID: \"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc\") " Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.250968 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs" (OuterVolumeSpecName: "kube-api-access-s7pjs") pod "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" (UID: "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc"). InnerVolumeSpecName "kube-api-access-s7pjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.277558 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory" (OuterVolumeSpecName: "inventory") pod "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" (UID: "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.277751 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" (UID: "ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.346930 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.346972 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.346987 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7pjs\" (UniqueName: \"kubernetes.io/projected/ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc-kube-api-access-s7pjs\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.452756 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" event={"ID":"ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc","Type":"ContainerDied","Data":"ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba"} Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.452801 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1bfe8ac89d2615948537ed94a89c649a2214b20bbd3426ccd12eadb6c275ba" Dec 09 13:12:31 crc kubenswrapper[4703]: I1209 13:12:31.452829 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg" Dec 09 13:12:31 crc kubenswrapper[4703]: E1209 13:12:31.576645 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff13d4cd_5b50_4df6_9c21_fe4eed8fa7bc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:12:36 crc kubenswrapper[4703]: E1209 13:12:36.073568 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:12:37 crc kubenswrapper[4703]: E1209 13:12:37.072048 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:12:50 crc kubenswrapper[4703]: E1209 13:12:50.073182 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:12:50 crc kubenswrapper[4703]: E1209 13:12:50.073240 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" 
podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:13:00 crc kubenswrapper[4703]: I1209 13:13:00.083155 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:13:00 crc kubenswrapper[4703]: I1209 13:13:00.084044 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:13:02 crc kubenswrapper[4703]: E1209 13:13:02.072527 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:13:03 crc kubenswrapper[4703]: E1209 13:13:03.071913 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:13:14 crc kubenswrapper[4703]: E1209 13:13:14.074310 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:13:15 crc kubenswrapper[4703]: E1209 13:13:15.074360 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:13:27 crc kubenswrapper[4703]: E1209 13:13:27.073021 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:13:29 crc kubenswrapper[4703]: E1209 13:13:29.071821 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:13:30 crc kubenswrapper[4703]: I1209 13:13:30.083445 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Dec 09 13:13:30 crc kubenswrapper[4703]: I1209 13:13:30.083778 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:13:30 crc kubenswrapper[4703]: I1209 13:13:30.083819 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk"
Dec 09 13:13:30 crc kubenswrapper[4703]: I1209 13:13:30.561644 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 13:13:30 crc kubenswrapper[4703]: I1209 13:13:30.562240 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04" gracePeriod=600
Dec 09 13:13:31 crc kubenswrapper[4703]: I1209 13:13:31.573360 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04" exitCode=0
Dec 09 13:13:31 crc kubenswrapper[4703]: I1209 13:13:31.573441 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04"}
Dec 09 13:13:31 crc kubenswrapper[4703]: I1209 13:13:31.574416 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"}
Dec 09 13:13:31 crc kubenswrapper[4703]: I1209 13:13:31.574451 4703 scope.go:117] "RemoveContainer" containerID="9fa2385da1baff876428e5756a19c3d3ddf508cad92b88df98cc322f0c8079d9"
Dec 09 13:13:38 crc kubenswrapper[4703]: E1209 13:13:38.072131 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:13:41 crc kubenswrapper[4703]: E1209 13:13:41.083787 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:13:50 crc kubenswrapper[4703]: E1209 13:13:50.073294 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:13:54 crc kubenswrapper[4703]: E1209 13:13:54.072396 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:02 crc kubenswrapper[4703]: E1209 13:14:02.075584 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:14:05 crc kubenswrapper[4703]: E1209 13:14:05.071980 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:15 crc kubenswrapper[4703]: I1209 13:14:15.072153 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:14:15 crc kubenswrapper[4703]: E1209 13:14:15.216279 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:14:15 crc kubenswrapper[4703]: E1209 13:14:15.216356 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:14:15 crc kubenswrapper[4703]: E1209 13:14:15.216498 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:14:15 crc kubenswrapper[4703]: E1209 13:14:15.217887 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:14:19 crc kubenswrapper[4703]: E1209 13:14:19.072540 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:27 crc kubenswrapper[4703]: E1209 13:14:27.073468 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:14:30 crc kubenswrapper[4703]: E1209 13:14:30.073143 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:42 crc kubenswrapper[4703]: E1209 13:14:42.072241 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:14:43 crc kubenswrapper[4703]: E1209 13:14:43.210587 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:14:43 crc kubenswrapper[4703]: E1209 13:14:43.211141 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:14:43 crc kubenswrapper[4703]: E1209 13:14:43.211376 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:14:43 crc kubenswrapper[4703]: E1209 13:14:43.212651 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.277295 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"] Dec 09 13:14:45 crc kubenswrapper[4703]: E1209 13:14:45.277838 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="extract-content" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.277855 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="extract-content" Dec 09 13:14:45 crc kubenswrapper[4703]: E1209 13:14:45.277872 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="registry-server" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.277878 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="registry-server" Dec 09 13:14:45 crc kubenswrapper[4703]: E1209 13:14:45.277889 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="extract-utilities" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.277895 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="extract-utilities" Dec 09 13:14:45 crc kubenswrapper[4703]: E1209 13:14:45.277917 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.277923 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.278155 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af388a2-b042-4681-80cf-c2563c82fd7f" containerName="registry-server" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.278176 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.279974 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.291217 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"] Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.421755 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77xh4\" (UniqueName: \"kubernetes.io/projected/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-kube-api-access-77xh4\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.421829 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.421861 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.525001 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77xh4\" (UniqueName: \"kubernetes.io/projected/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-kube-api-access-77xh4\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.525091 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.525124 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.525835 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.525876 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content\") pod \"redhat-marketplace-vbfkg\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.552835 4703 operation_generator.go:637] "MountVolume.SetUp 
Dec 09 13:14:45 crc kubenswrapper[4703]: I1209 13:14:45.626793 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbfkg"
Dec 09 13:14:46 crc kubenswrapper[4703]: I1209 13:14:46.204278 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"]
Dec 09 13:14:46 crc kubenswrapper[4703]: I1209 13:14:46.547981 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerStarted","Data":"a2a3aaee1cd9140c952e0f39df418861b6dba1fcdd669c781f689924aaff5715"}
Dec 09 13:14:47 crc kubenswrapper[4703]: I1209 13:14:47.560811 4703 generic.go:334] "Generic (PLEG): container finished" podID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerID="473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e" exitCode=0
Dec 09 13:14:47 crc kubenswrapper[4703]: I1209 13:14:47.560874 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerDied","Data":"473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e"}
Dec 09 13:14:48 crc kubenswrapper[4703]: I1209 13:14:48.581767 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerStarted","Data":"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122"}
Dec 09 13:14:49 crc kubenswrapper[4703]: I1209 13:14:49.595251 4703 generic.go:334] "Generic (PLEG): container finished" podID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerID="391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122" exitCode=0
Dec 09 13:14:49 crc kubenswrapper[4703]: I1209 13:14:49.595304 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerDied","Data":"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122"}
Dec 09 13:14:50 crc kubenswrapper[4703]: I1209 13:14:50.613273 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerStarted","Data":"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a"}
Dec 09 13:14:50 crc kubenswrapper[4703]: I1209 13:14:50.644060 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vbfkg" podStartSLOduration=2.898692729 podStartE2EDuration="5.644036303s" podCreationTimestamp="2025-12-09 13:14:45 +0000 UTC" firstStartedPulling="2025-12-09 13:14:47.565458843 +0000 UTC m=+4186.814222362" lastFinishedPulling="2025-12-09 13:14:50.310802427 +0000 UTC m=+4189.559565936" observedRunningTime="2025-12-09 13:14:50.635225216 +0000 UTC m=+4189.883988735" watchObservedRunningTime="2025-12-09 13:14:50.644036303 +0000 UTC m=+4189.892799822"
Dec 09 13:14:54 crc kubenswrapper[4703]: E1209 13:14:54.072892 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:14:55 crc kubenswrapper[4703]: E1209 13:14:55.072327 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:14:55 crc kubenswrapper[4703]: I1209 13:14:55.627098 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:55 crc kubenswrapper[4703]: I1209 13:14:55.628452 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:55 crc kubenswrapper[4703]: I1209 13:14:55.688575 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:56 crc kubenswrapper[4703]: I1209 13:14:56.766176 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:56 crc kubenswrapper[4703]: I1209 13:14:56.864386 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"] Dec 09 13:14:58 crc kubenswrapper[4703]: I1209 13:14:58.713680 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vbfkg" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="registry-server" containerID="cri-o://810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a" gracePeriod=2 Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.382986 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.472390 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities\") pod \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.472555 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77xh4\" (UniqueName: \"kubernetes.io/projected/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-kube-api-access-77xh4\") pod \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.472817 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content\") pod \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\" (UID: \"0ec6a113-f611-4037-ba0e-ce31ae6d19b3\") " Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.473942 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities" (OuterVolumeSpecName: "utilities") pod "0ec6a113-f611-4037-ba0e-ce31ae6d19b3" (UID: "0ec6a113-f611-4037-ba0e-ce31ae6d19b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.479809 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-kube-api-access-77xh4" (OuterVolumeSpecName: "kube-api-access-77xh4") pod "0ec6a113-f611-4037-ba0e-ce31ae6d19b3" (UID: "0ec6a113-f611-4037-ba0e-ce31ae6d19b3"). InnerVolumeSpecName "kube-api-access-77xh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.499437 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ec6a113-f611-4037-ba0e-ce31ae6d19b3" (UID: "0ec6a113-f611-4037-ba0e-ce31ae6d19b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.576464 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77xh4\" (UniqueName: \"kubernetes.io/projected/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-kube-api-access-77xh4\") on node \"crc\" DevicePath \"\"" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.577303 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.577325 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ec6a113-f611-4037-ba0e-ce31ae6d19b3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.731512 4703 generic.go:334] "Generic (PLEG): container finished" podID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerID="810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a" exitCode=0 Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.731575 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerDied","Data":"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a"} Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.731620 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbfkg" event={"ID":"0ec6a113-f611-4037-ba0e-ce31ae6d19b3","Type":"ContainerDied","Data":"a2a3aaee1cd9140c952e0f39df418861b6dba1fcdd669c781f689924aaff5715"} Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.731627 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbfkg" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.731647 4703 scope.go:117] "RemoveContainer" containerID="810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.758955 4703 scope.go:117] "RemoveContainer" containerID="391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.775609 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"] Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.783797 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbfkg"] Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.792692 4703 scope.go:117] "RemoveContainer" containerID="473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.859557 4703 scope.go:117] "RemoveContainer" containerID="810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a" Dec 09 13:14:59 crc kubenswrapper[4703]: E1209 13:14:59.860603 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a\": container with ID starting with 810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a not found: ID does not exist" containerID="810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.860722 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a"} err="failed to get container status \"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a\": rpc error: code = NotFound desc = could not find container \"810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a\": container with ID starting with 810c03adb3f4d09c0512fffcf85579ae84a8cfd972a93b16b2aeb0b79ba7c15a not found: ID does not exist" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.860839 4703 scope.go:117] "RemoveContainer" containerID="391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122" Dec 09 13:14:59 crc kubenswrapper[4703]: E1209 13:14:59.861335 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122\": container with ID starting with 391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122 not found: ID does not exist" containerID="391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.861432 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122"} err="failed to get container status \"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122\": rpc error: code = NotFound desc = could not find container \"391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122\": container with ID starting with 391b74909b3d06b60e143fafdfee087167e6cbebdc15a33859aa3c6d107db122 not found: ID does not exist" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.861499 4703 scope.go:117] "RemoveContainer" 
containerID="473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e" Dec 09 13:14:59 crc kubenswrapper[4703]: E1209 13:14:59.862124 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e\": container with ID starting with 473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e not found: ID does not exist" containerID="473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e" Dec 09 13:14:59 crc kubenswrapper[4703]: I1209 13:14:59.862177 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e"} err="failed to get container status \"473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e\": rpc error: code = NotFound desc = could not find container \"473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e\": container with ID starting with 473695eb64dbd1734370ebe30e13988fdb43bb191ea424409f6d0647bbc2169e not found: ID does not exist" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.192485 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd"] Dec 09 13:15:00 crc kubenswrapper[4703]: E1209 13:15:00.193318 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="registry-server" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.193390 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="registry-server" Dec 09 13:15:00 crc kubenswrapper[4703]: E1209 13:15:00.193482 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="extract-utilities" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.193544 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="extract-utilities" Dec 09 13:15:00 crc kubenswrapper[4703]: E1209 13:15:00.193626 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="extract-content" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.193681 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="extract-content" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.193949 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" containerName="registry-server" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.195112 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.199909 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.200981 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.206403 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd"] Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.293710 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.293806 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.294355 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbsl\" (UniqueName: \"kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.398447 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.398531 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.398583 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbsl\" (UniqueName: \"kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.400292 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume\") pod 
\"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.404459 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.418303 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbsl\" (UniqueName: \"kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl\") pod \"collect-profiles-29421435-5rmmd\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:00 crc kubenswrapper[4703]: I1209 13:15:00.532174 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:01 crc kubenswrapper[4703]: I1209 13:15:01.138796 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec6a113-f611-4037-ba0e-ce31ae6d19b3" path="/var/lib/kubelet/pods/0ec6a113-f611-4037-ba0e-ce31ae6d19b3/volumes" Dec 09 13:15:01 crc kubenswrapper[4703]: I1209 13:15:01.140893 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd"] Dec 09 13:15:01 crc kubenswrapper[4703]: W1209 13:15:01.746680 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa9fef9_4e95_4ee1_a852_8f48c90855f7.slice/crio-d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d WatchSource:0}: Error finding container d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d: Status 404 returned error can't find the container with id d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d Dec 09 13:15:01 crc kubenswrapper[4703]: I1209 13:15:01.765083 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" event={"ID":"2aa9fef9-4e95-4ee1-a852-8f48c90855f7","Type":"ContainerStarted","Data":"d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d"} Dec 09 13:15:02 crc kubenswrapper[4703]: I1209 13:15:02.780407 4703 generic.go:334] "Generic (PLEG): container finished" podID="2aa9fef9-4e95-4ee1-a852-8f48c90855f7" containerID="cfc873309763743329dcc071c709ca0653d835258cfeff5befbf118425135afd" exitCode=0 Dec 09 13:15:02 crc kubenswrapper[4703]: I1209 13:15:02.780638 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" event={"ID":"2aa9fef9-4e95-4ee1-a852-8f48c90855f7","Type":"ContainerDied","Data":"cfc873309763743329dcc071c709ca0653d835258cfeff5befbf118425135afd"} Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.570604 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.636637 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbsl\" (UniqueName: \"kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl\") pod \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.636896 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume\") pod \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.636975 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume\") pod \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\" (UID: \"2aa9fef9-4e95-4ee1-a852-8f48c90855f7\") " Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.639319 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "2aa9fef9-4e95-4ee1-a852-8f48c90855f7" (UID: "2aa9fef9-4e95-4ee1-a852-8f48c90855f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.730489 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl" (OuterVolumeSpecName: "kube-api-access-jrbsl") pod "2aa9fef9-4e95-4ee1-a852-8f48c90855f7" (UID: "2aa9fef9-4e95-4ee1-a852-8f48c90855f7"). InnerVolumeSpecName "kube-api-access-jrbsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.731160 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2aa9fef9-4e95-4ee1-a852-8f48c90855f7" (UID: "2aa9fef9-4e95-4ee1-a852-8f48c90855f7"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.740141 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.740185 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.740212 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbsl\" (UniqueName: \"kubernetes.io/projected/2aa9fef9-4e95-4ee1-a852-8f48c90855f7-kube-api-access-jrbsl\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.817008 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" event={"ID":"2aa9fef9-4e95-4ee1-a852-8f48c90855f7","Type":"ContainerDied","Data":"d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d"} Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.817053 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d186b04c78bab3ab1b35e7dd1ec723ce56072a5ac14cd71016416816e588407d" Dec 09 13:15:04 crc kubenswrapper[4703]: I1209 13:15:04.817151 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-5rmmd" Dec 09 13:15:05 crc kubenswrapper[4703]: I1209 13:15:05.711325 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"] Dec 09 13:15:05 crc kubenswrapper[4703]: I1209 13:15:05.727486 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-7x87k"] Dec 09 13:15:07 crc kubenswrapper[4703]: I1209 13:15:07.084776 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624667b4-444c-42a5-91e1-cd0bbc2f79ae" path="/var/lib/kubelet/pods/624667b4-444c-42a5-91e1-cd0bbc2f79ae/volumes" Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.048676 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"] Dec 09 13:15:08 crc kubenswrapper[4703]: E1209 13:15:08.049916 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa9fef9-4e95-4ee1-a852-8f48c90855f7" containerName="collect-profiles" Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.049951 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa9fef9-4e95-4ee1-a852-8f48c90855f7" containerName="collect-profiles" Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.050278 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa9fef9-4e95-4ee1-a852-8f48c90855f7" containerName="collect-profiles" Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.051390 4703 util.go:30] "No sandbox for pod can be found. 
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.051390 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.061048 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.061111 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.061461 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.061537 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.071992 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"]
Dec 09 13:15:08 crc kubenswrapper[4703]: E1209 13:15:08.074900 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:15:08 crc kubenswrapper[4703]: E1209 13:15:08.075639 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.227915 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.227989 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.228052 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9nz9\" (UniqueName: \"kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.330616 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.330740 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.330832 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9nz9\" (UniqueName: \"kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.344159 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.344156 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.352658 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9nz9\" (UniqueName: \"kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.394235 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:15:08 crc kubenswrapper[4703]: I1209 13:15:08.953483 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"]
Dec 09 13:15:09 crc kubenswrapper[4703]: I1209 13:15:09.873063 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m" event={"ID":"43ca3e6a-7682-46cb-97e4-51c7a8010e98","Type":"ContainerStarted","Data":"d258e0eedeead6698847e3391a4988c2297996e413f37a5003e4d87da7f7000f"}
Dec 09 13:15:09 crc kubenswrapper[4703]: I1209 13:15:09.874246 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m" event={"ID":"43ca3e6a-7682-46cb-97e4-51c7a8010e98","Type":"ContainerStarted","Data":"ff5ce3255d607c9dd927fa1b3d39bfdb9853727ff1f14be1f8da40151bd87a34"}
Dec 09 13:15:09 crc kubenswrapper[4703]: I1209 13:15:09.903971 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m" podStartSLOduration=1.441364176 podStartE2EDuration="1.903943326s" podCreationTimestamp="2025-12-09 13:15:08 +0000 UTC" firstStartedPulling="2025-12-09 13:15:08.960437524 +0000 UTC m=+4208.209201053" lastFinishedPulling="2025-12-09 13:15:09.423016694 +0000 UTC m=+4208.671780203" observedRunningTime="2025-12-09 13:15:09.893033483 +0000 UTC m=+4209.141797012" watchObservedRunningTime="2025-12-09 13:15:09.903943326 +0000 UTC m=+4209.152706845"
Dec 09 13:15:19 crc kubenswrapper[4703]: E1209 13:15:19.073325 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:15:20 crc kubenswrapper[4703]: E1209 13:15:20.072801 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:15:24 crc kubenswrapper[4703]: I1209 13:15:24.020035 4703 scope.go:117] "RemoveContainer" containerID="9dfa5b1ce6aff44fbb0e693c090028d418a01dde89bd591b3d661566e3951420"
Dec 09 13:15:30 crc kubenswrapper[4703]: I1209 13:15:30.084615 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:15:30 crc kubenswrapper[4703]: I1209 13:15:30.085440 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
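The pod_startup_latency_tracker entry above for download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m encodes a small calculation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the arithmetic, with the log's timestamps truncated to microseconds (the tracker itself subtracts the monotonic m=+ offsets, so the final digits differ very slightly):

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"
    created       = datetime.strptime("2025-12-09 13:15:08.000000", FMT)  # podCreationTimestamp (whole second in the log)
    first_pulling = datetime.strptime("2025-12-09 13:15:08.960437", FMT)  # firstStartedPulling
    last_pulled   = datetime.strptime("2025-12-09 13:15:09.423016", FMT)  # lastFinishedPulling
    observed_run  = datetime.strptime("2025-12-09 13:15:09.903943", FMT)  # watchObservedRunningTime

    e2e  = (observed_run - created).total_seconds()
    pull = (last_pulled - first_pulling).total_seconds()
    print(f"E2E: {e2e:.6f}s  pull: {pull:.6f}s  SLO: {e2e - pull:.6f}s")
    # E2E: 1.903943s  pull: 0.462579s  SLO: 1.441364s -- matching the
    # podStartE2EDuration and podStartSLOduration values in the entry above.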
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:15:34 crc kubenswrapper[4703]: E1209 13:15:34.073036 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:15:45 crc kubenswrapper[4703]: E1209 13:15:45.089022 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:15:48 crc kubenswrapper[4703]: E1209 13:15:48.074352 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:15:58 crc kubenswrapper[4703]: E1209 13:15:58.073785 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:16:00 crc kubenswrapper[4703]: I1209 13:16:00.084153 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:16:00 crc kubenswrapper[4703]: I1209 13:16:00.085798 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:16:01 crc kubenswrapper[4703]: E1209 13:16:01.086283 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:16:13 crc kubenswrapper[4703]: E1209 13:16:13.073669 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:16:16 crc kubenswrapper[4703]: E1209 13:16:16.072595 4703 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:16:25 crc kubenswrapper[4703]: E1209 13:16:25.072629 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:16:27 crc kubenswrapper[4703]: E1209 13:16:27.103433 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.084137 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.084870 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.084935 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.086085 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.086141 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" gracePeriod=600 Dec 09 13:16:30 crc kubenswrapper[4703]: E1209 13:16:30.227086 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.860937 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" exitCode=0 Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.861483 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"} Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.861536 4703 scope.go:117] "RemoveContainer" containerID="f30a6c9f3a12ee1b27686558b4578528b7079a65b9e0ed554e65cbed4a034e04" Dec 09 13:16:30 crc kubenswrapper[4703]: I1209 13:16:30.862710 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:16:30 crc kubenswrapper[4703]: E1209 13:16:30.863101 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:16:39 crc kubenswrapper[4703]: E1209 13:16:39.073354 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:16:40 crc kubenswrapper[4703]: E1209 13:16:40.071861 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:16:42 crc kubenswrapper[4703]: I1209 13:16:42.071318 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:16:42 crc kubenswrapper[4703]: E1209 13:16:42.072347 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:16:50 crc kubenswrapper[4703]: E1209 13:16:50.072076 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:16:54 crc kubenswrapper[4703]: I1209 13:16:54.071391 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:16:54 crc kubenswrapper[4703]: E1209 13:16:54.072310 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:16:55 crc kubenswrapper[4703]: E1209 13:16:55.075062 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:17:02 crc kubenswrapper[4703]: E1209 13:17:02.072796 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:17:05 crc kubenswrapper[4703]: I1209 13:17:05.071503 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:17:05 crc kubenswrapper[4703]: E1209 13:17:05.072640 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:17:09 crc kubenswrapper[4703]: E1209 13:17:09.073447 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:17:17 crc kubenswrapper[4703]: I1209 13:17:17.070525 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:17:17 crc kubenswrapper[4703]: E1209 13:17:17.071749 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:17:17 crc kubenswrapper[4703]: E1209 13:17:17.073014 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:17:24 crc kubenswrapper[4703]: E1209 13:17:24.072523 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:17:28 crc kubenswrapper[4703]: E1209 13:17:28.074460 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:17:30 crc kubenswrapper[4703]: I1209 13:17:30.070521 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:17:30 crc kubenswrapper[4703]: E1209 13:17:30.071575 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:17:39 crc kubenswrapper[4703]: E1209 13:17:39.072647 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:17:42 crc kubenswrapper[4703]: E1209 13:17:42.073742 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:17:43 crc kubenswrapper[4703]: I1209 13:17:43.070716 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:17:43 crc kubenswrapper[4703]: E1209 13:17:43.071114 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:17:51 crc kubenswrapper[4703]: E1209 13:17:51.086951 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:17:54 crc kubenswrapper[4703]: E1209 13:17:54.089913 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:17:58 crc kubenswrapper[4703]: I1209 13:17:58.070097 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:17:58 crc kubenswrapper[4703]: E1209 13:17:58.072661 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:18:03 crc kubenswrapper[4703]: E1209 13:18:03.072461 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:18:05 crc kubenswrapper[4703]: E1209 13:18:05.073691 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:18:12 crc kubenswrapper[4703]: I1209 13:18:12.070139 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:18:12 crc kubenswrapper[4703]: E1209 13:18:12.071170 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:18:14 crc kubenswrapper[4703]: E1209 13:18:14.072984 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:18:18 crc kubenswrapper[4703]: E1209 13:18:18.074670 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:18:27 crc kubenswrapper[4703]: I1209 13:18:27.071041 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:18:27 crc kubenswrapper[4703]: E1209 13:18:27.071877 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 09 13:18:27 crc kubenswrapper[4703]: E1209 13:18:27.071877 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:18:29 crc kubenswrapper[4703]: E1209 13:18:29.073288 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:18:30 crc kubenswrapper[4703]: E1209 13:18:30.072777 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:18:40 crc kubenswrapper[4703]: I1209 13:18:40.070936 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:18:40 crc kubenswrapper[4703]: E1209 13:18:40.074313 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:18:44 crc kubenswrapper[4703]: E1209 13:18:44.073688 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:18:44 crc kubenswrapper[4703]: E1209 13:18:44.074083 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:18:55 crc kubenswrapper[4703]: I1209 13:18:55.069882 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:18:55 crc kubenswrapper[4703]: E1209 13:18:55.070985 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:18:56 crc kubenswrapper[4703]: E1209 13:18:56.074052 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:18:59 crc kubenswrapper[4703]: E1209 13:18:59.074161 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:19:08 crc kubenswrapper[4703]: I1209 13:19:08.070553 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:19:08 crc kubenswrapper[4703]: E1209 13:19:08.071696 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:19:11 crc kubenswrapper[4703]: E1209 13:19:11.090424 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:19:11 crc kubenswrapper[4703]: E1209 13:19:11.090424 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:19:19 crc kubenswrapper[4703]: I1209 13:19:19.070615 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:19:19 crc kubenswrapper[4703]: E1209 13:19:19.071994 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:19:24 crc kubenswrapper[4703]: E1209 13:19:24.074507 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:19:27 crc kubenswrapper[4703]: I1209 13:19:27.073012 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 13:19:27 crc kubenswrapper[4703]: E1209 13:19:27.210272 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 13:19:27 crc kubenswrapper[4703]: E1209 13:19:27.210359 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
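The two entries above expose the root cause behind all of the earlier ImagePullBackOff noise: the registry has expired the current-tested tag ("deleted or has expired. To pull, revive via time machine" is Quay's tag-expiration wording). Whether a tag still resolves can be checked against the standard Registry v2 manifest endpoint; a minimal sketch, assuming anonymous access is permitted (a private repository would first need a Bearer token from the registry's auth service):

    import urllib.request
    import urllib.error

    # HEAD the Registry v2 manifest endpoint for the tag named in the log.
    REGISTRY = "quay.rdoproject.org"
    REPO = "podified-master-centos10/openstack-ceilometer-central"
    TAG = "current-tested"

    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{REPO}/manifests/{TAG}",
        method="HEAD",
        headers={"Accept": "application/vnd.oci.image.manifest.v1+json, "
                           "application/vnd.docker.distribution.manifest.v2+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print("tag present:", resp.status)
    except urllib.error.HTTPError as exc:
        print("tag missing or expired:", exc.code)  # 404 once the tag is gone

A 404 here corresponds to the "reading manifest current-tested ... unknown" error the kubelet reports.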
Dec 09 13:19:27 crc kubenswrapper[4703]: E1209 13:19:27.210616 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:19:27 crc kubenswrapper[4703]: E1209 13:19:27.212605 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.558299 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gwwgk"]
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.561928 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwgk"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.576878 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwwgk"]
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.671216 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.671664 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.672221 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9twr\" (UniqueName: \"kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.775794 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk"
Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.775870 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk"
\"kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.776967 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.776972 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.801583 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9twr\" (UniqueName: \"kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr\") pod \"community-operators-gwwgk\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:32 crc kubenswrapper[4703]: I1209 13:19:32.897817 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:33 crc kubenswrapper[4703]: I1209 13:19:33.463520 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwwgk"] Dec 09 13:19:33 crc kubenswrapper[4703]: W1209 13:19:33.464597 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17006cfe_6767_441a_890e_2a06ba771460.slice/crio-4c9000a40a1cd7e91542caa9df7ca7722f98046a5daa67e028e338098b431d88 WatchSource:0}: Error finding container 4c9000a40a1cd7e91542caa9df7ca7722f98046a5daa67e028e338098b431d88: Status 404 returned error can't find the container with id 4c9000a40a1cd7e91542caa9df7ca7722f98046a5daa67e028e338098b431d88 Dec 09 13:19:34 crc kubenswrapper[4703]: I1209 13:19:34.003380 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerStarted","Data":"4c9000a40a1cd7e91542caa9df7ca7722f98046a5daa67e028e338098b431d88"} Dec 09 13:19:34 crc kubenswrapper[4703]: I1209 13:19:34.069780 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:19:34 crc kubenswrapper[4703]: E1209 13:19:34.070215 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:19:35 crc kubenswrapper[4703]: I1209 13:19:35.018579 4703 generic.go:334] "Generic (PLEG): container finished" podID="17006cfe-6767-441a-890e-2a06ba771460" containerID="71cc857ed409d9e5c3440fb0eb8e3d231cd79ab07391981dd12e011a897fe8d6" exitCode=0 Dec 
09 13:19:35 crc kubenswrapper[4703]: I1209 13:19:35.018675 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerDied","Data":"71cc857ed409d9e5c3440fb0eb8e3d231cd79ab07391981dd12e011a897fe8d6"} Dec 09 13:19:36 crc kubenswrapper[4703]: I1209 13:19:36.031538 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerStarted","Data":"d24b70be32731ec85c4a9e10aae80a2b61daae5493044443deca7562b3c1d0a8"} Dec 09 13:19:37 crc kubenswrapper[4703]: I1209 13:19:37.049866 4703 generic.go:334] "Generic (PLEG): container finished" podID="17006cfe-6767-441a-890e-2a06ba771460" containerID="d24b70be32731ec85c4a9e10aae80a2b61daae5493044443deca7562b3c1d0a8" exitCode=0 Dec 09 13:19:37 crc kubenswrapper[4703]: I1209 13:19:37.049977 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerDied","Data":"d24b70be32731ec85c4a9e10aae80a2b61daae5493044443deca7562b3c1d0a8"} Dec 09 13:19:37 crc kubenswrapper[4703]: E1209 13:19:37.074459 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:19:38 crc kubenswrapper[4703]: I1209 13:19:38.064820 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerStarted","Data":"fb5cb170cb58d5c255bc6b7b6c111b9289220336371e301ae2dbfabe6378b48d"} Dec 09 13:19:38 crc kubenswrapper[4703]: I1209 13:19:38.089775 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gwwgk" podStartSLOduration=3.650659877 podStartE2EDuration="6.089745738s" podCreationTimestamp="2025-12-09 13:19:32 +0000 UTC" firstStartedPulling="2025-12-09 13:19:35.021071943 +0000 UTC m=+4474.269835462" lastFinishedPulling="2025-12-09 13:19:37.460157804 +0000 UTC m=+4476.708921323" observedRunningTime="2025-12-09 13:19:38.08422473 +0000 UTC m=+4477.332988249" watchObservedRunningTime="2025-12-09 13:19:38.089745738 +0000 UTC m=+4477.338509257" Dec 09 13:19:42 crc kubenswrapper[4703]: E1209 13:19:42.074527 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:19:42 crc kubenswrapper[4703]: I1209 13:19:42.898547 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:42 crc kubenswrapper[4703]: I1209 13:19:42.899016 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:42 crc kubenswrapper[4703]: I1209 13:19:42.954180 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
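The probe transitions here (startup unhealthy, then started, then readiness ready just below) are what populate the started and ready booleans in the pod's containerStatuses, alongside waiting reasons such as ImagePullBackOff for the stuck openstack pods. A sketch of reading those fields with the official Python client, assuming the kubernetes package and a reachable kubeconfig (the catalog pod is short-lived, so the name from the log may already be gone):

    from kubernetes import client, config

    # Requires the official `kubernetes` client and a valid kubeconfig.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("community-operators-gwwgk", "openshift-marketplace")
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting.reason if cs.state.waiting else None
        print(f"{cs.name}: started={cs.started} ready={cs.ready} waiting={waiting}")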
pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:43 crc kubenswrapper[4703]: I1209 13:19:43.172888 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:43 crc kubenswrapper[4703]: I1209 13:19:43.235420 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwwgk"] Dec 09 13:19:45 crc kubenswrapper[4703]: I1209 13:19:45.147391 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gwwgk" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="registry-server" containerID="cri-o://fb5cb170cb58d5c255bc6b7b6c111b9289220336371e301ae2dbfabe6378b48d" gracePeriod=2 Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.156550 4703 generic.go:334] "Generic (PLEG): container finished" podID="17006cfe-6767-441a-890e-2a06ba771460" containerID="fb5cb170cb58d5c255bc6b7b6c111b9289220336371e301ae2dbfabe6378b48d" exitCode=0 Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.156649 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerDied","Data":"fb5cb170cb58d5c255bc6b7b6c111b9289220336371e301ae2dbfabe6378b48d"} Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.451758 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.561710 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities\") pod \"17006cfe-6767-441a-890e-2a06ba771460\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.561851 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content\") pod \"17006cfe-6767-441a-890e-2a06ba771460\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.561976 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9twr\" (UniqueName: \"kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr\") pod \"17006cfe-6767-441a-890e-2a06ba771460\" (UID: \"17006cfe-6767-441a-890e-2a06ba771460\") " Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.563043 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities" (OuterVolumeSpecName: "utilities") pod "17006cfe-6767-441a-890e-2a06ba771460" (UID: "17006cfe-6767-441a-890e-2a06ba771460"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.594622 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr" (OuterVolumeSpecName: "kube-api-access-z9twr") pod "17006cfe-6767-441a-890e-2a06ba771460" (UID: "17006cfe-6767-441a-890e-2a06ba771460"). InnerVolumeSpecName "kube-api-access-z9twr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.620594 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17006cfe-6767-441a-890e-2a06ba771460" (UID: "17006cfe-6767-441a-890e-2a06ba771460"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.665879 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.665925 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17006cfe-6767-441a-890e-2a06ba771460-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:19:46 crc kubenswrapper[4703]: I1209 13:19:46.665942 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9twr\" (UniqueName: \"kubernetes.io/projected/17006cfe-6767-441a-890e-2a06ba771460-kube-api-access-z9twr\") on node \"crc\" DevicePath \"\"" Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.172237 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwwgk" event={"ID":"17006cfe-6767-441a-890e-2a06ba771460","Type":"ContainerDied","Data":"4c9000a40a1cd7e91542caa9df7ca7722f98046a5daa67e028e338098b431d88"} Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.172313 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwwgk" Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.172329 4703 scope.go:117] "RemoveContainer" containerID="fb5cb170cb58d5c255bc6b7b6c111b9289220336371e301ae2dbfabe6378b48d" Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.216181 4703 scope.go:117] "RemoveContainer" containerID="d24b70be32731ec85c4a9e10aae80a2b61daae5493044443deca7562b3c1d0a8" Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.223600 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwwgk"] Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.239843 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gwwgk"] Dec 09 13:19:47 crc kubenswrapper[4703]: I1209 13:19:47.256062 4703 scope.go:117] "RemoveContainer" containerID="71cc857ed409d9e5c3440fb0eb8e3d231cd79ab07391981dd12e011a897fe8d6" Dec 09 13:19:48 crc kubenswrapper[4703]: E1209 13:19:48.226989 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:19:48 crc kubenswrapper[4703]: E1209 13:19:48.227046 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:19:48 crc kubenswrapper[4703]: E1209 13:19:48.227168 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 13:19:48 crc kubenswrapper[4703]: E1209 13:19:48.228355 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:19:49 crc kubenswrapper[4703]: I1209 13:19:49.070882 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:19:49 crc kubenswrapper[4703]: E1209 13:19:49.072128 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:19:49 crc kubenswrapper[4703]: I1209 13:19:49.085208 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17006cfe-6767-441a-890e-2a06ba771460" path="/var/lib/kubelet/pods/17006cfe-6767-441a-890e-2a06ba771460/volumes" Dec 09 13:19:53 crc kubenswrapper[4703]: E1209 13:19:53.073602 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:01 crc kubenswrapper[4703]: I1209 13:20:01.080694 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:20:01 crc kubenswrapper[4703]: E1209 13:20:01.081891 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:20:03 crc kubenswrapper[4703]: E1209 13:20:03.074001 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:20:05 crc kubenswrapper[4703]: E1209 13:20:05.076578 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:13 crc kubenswrapper[4703]: I1209 13:20:13.070916 4703 
scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:20:13 crc kubenswrapper[4703]: E1209 13:20:13.072134 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:20:16 crc kubenswrapper[4703]: E1209 13:20:16.073124 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:20:17 crc kubenswrapper[4703]: E1209 13:20:17.073175 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:28 crc kubenswrapper[4703]: I1209 13:20:28.069633 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:20:28 crc kubenswrapper[4703]: E1209 13:20:28.070726 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:20:30 crc kubenswrapper[4703]: E1209 13:20:30.072878 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:20:32 crc kubenswrapper[4703]: E1209 13:20:32.072993 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:41 crc kubenswrapper[4703]: I1209 13:20:41.076745 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:20:41 crc kubenswrapper[4703]: E1209 13:20:41.077869 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" 
podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:20:41 crc kubenswrapper[4703]: E1209 13:20:41.080024 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:20:43 crc kubenswrapper[4703]: E1209 13:20:43.072635 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:52 crc kubenswrapper[4703]: E1209 13:20:52.072758 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:20:56 crc kubenswrapper[4703]: I1209 13:20:56.071664 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:20:56 crc kubenswrapper[4703]: E1209 13:20:56.073186 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:20:57 crc kubenswrapper[4703]: E1209 13:20:57.072724 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.878925 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"] Dec 09 13:20:59 crc kubenswrapper[4703]: E1209 13:20:59.880138 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="extract-content" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.880160 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="extract-content" Dec 09 13:20:59 crc kubenswrapper[4703]: E1209 13:20:59.880173 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="extract-utilities" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.880182 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="extract-utilities" Dec 09 13:20:59 crc kubenswrapper[4703]: E1209 13:20:59.880252 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="registry-server" Dec 09 13:20:59 crc 
kubenswrapper[4703]: I1209 13:20:59.880262 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="registry-server" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.880547 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="17006cfe-6767-441a-890e-2a06ba771460" containerName="registry-server" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.882855 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.902960 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"] Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.971508 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.972137 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:20:59 crc kubenswrapper[4703]: I1209 13:20:59.972175 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnd7\" (UniqueName: \"kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.073936 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.074080 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.074137 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qnd7\" (UniqueName: \"kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.074635 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: 
I1209 13:21:00.074918 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.232235 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qnd7\" (UniqueName: \"kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7\") pod \"redhat-operators-q7x9k\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:00 crc kubenswrapper[4703]: I1209 13:21:00.506386 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:01 crc kubenswrapper[4703]: I1209 13:21:01.067354 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"] Dec 09 13:21:01 crc kubenswrapper[4703]: I1209 13:21:01.295558 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerStarted","Data":"88675359b3e335290315b0c0b1cf08a4f28101bd8c7dd017c21313d8e1d9e332"} Dec 09 13:21:02 crc kubenswrapper[4703]: I1209 13:21:02.310598 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerID="b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9" exitCode=0 Dec 09 13:21:02 crc kubenswrapper[4703]: I1209 13:21:02.310672 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerDied","Data":"b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9"} Dec 09 13:21:03 crc kubenswrapper[4703]: E1209 13:21:03.078532 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:21:03 crc kubenswrapper[4703]: I1209 13:21:03.325696 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerStarted","Data":"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"} Dec 09 13:21:06 crc kubenswrapper[4703]: I1209 13:21:06.371085 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerID="0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4" exitCode=0 Dec 09 13:21:06 crc kubenswrapper[4703]: I1209 13:21:06.371258 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerDied","Data":"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"} Dec 09 13:21:07 crc kubenswrapper[4703]: I1209 13:21:07.070341 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:21:07 crc kubenswrapper[4703]: E1209 13:21:07.070962 4703 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:21:07 crc kubenswrapper[4703]: I1209 13:21:07.384802 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerStarted","Data":"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"} Dec 09 13:21:07 crc kubenswrapper[4703]: I1209 13:21:07.407658 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q7x9k" podStartSLOduration=3.6235950839999997 podStartE2EDuration="8.407642785s" podCreationTimestamp="2025-12-09 13:20:59 +0000 UTC" firstStartedPulling="2025-12-09 13:21:02.315756289 +0000 UTC m=+4561.564520138" lastFinishedPulling="2025-12-09 13:21:07.09980432 +0000 UTC m=+4566.348567839" observedRunningTime="2025-12-09 13:21:07.405893828 +0000 UTC m=+4566.654657347" watchObservedRunningTime="2025-12-09 13:21:07.407642785 +0000 UTC m=+4566.656406304" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.040711 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.043933 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.056628 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.100812 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5ls\" (UniqueName: \"kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.100873 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.100998 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.202916 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5ls\" (UniqueName: \"kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " 
pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.202967 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.203056 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.203662 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.203940 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.228678 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5ls\" (UniqueName: \"kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls\") pod \"certified-operators-kbkpk\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.367029 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:08 crc kubenswrapper[4703]: I1209 13:21:08.987481 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:08 crc kubenswrapper[4703]: W1209 13:21:08.988637 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ffc936_e5e5_4919_b4a3_e3040d948852.slice/crio-b635f537dd00ce2534fc0de96dbcc5a57f9a1c6a9df2987bb24a3cc0b9190755 WatchSource:0}: Error finding container b635f537dd00ce2534fc0de96dbcc5a57f9a1c6a9df2987bb24a3cc0b9190755: Status 404 returned error can't find the container with id b635f537dd00ce2534fc0de96dbcc5a57f9a1c6a9df2987bb24a3cc0b9190755 Dec 09 13:21:09 crc kubenswrapper[4703]: E1209 13:21:09.072622 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:21:09 crc kubenswrapper[4703]: I1209 13:21:09.406625 4703 generic.go:334] "Generic (PLEG): container finished" podID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerID="ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d" exitCode=0 Dec 09 13:21:09 crc kubenswrapper[4703]: I1209 13:21:09.406698 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerDied","Data":"ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d"} Dec 09 13:21:09 crc kubenswrapper[4703]: I1209 13:21:09.407049 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerStarted","Data":"b635f537dd00ce2534fc0de96dbcc5a57f9a1c6a9df2987bb24a3cc0b9190755"} Dec 09 13:21:10 crc kubenswrapper[4703]: I1209 13:21:10.422249 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerStarted","Data":"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16"} Dec 09 13:21:10 crc kubenswrapper[4703]: I1209 13:21:10.506904 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:10 crc kubenswrapper[4703]: I1209 13:21:10.507094 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:11 crc kubenswrapper[4703]: I1209 13:21:11.438938 4703 generic.go:334] "Generic (PLEG): container finished" podID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerID="7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16" exitCode=0 Dec 09 13:21:11 crc kubenswrapper[4703]: I1209 13:21:11.439566 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerDied","Data":"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16"} Dec 09 13:21:11 crc kubenswrapper[4703]: I1209 13:21:11.563333 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q7x9k" 
podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="registry-server" probeResult="failure" output=< Dec 09 13:21:11 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 13:21:11 crc kubenswrapper[4703]: > Dec 09 13:21:13 crc kubenswrapper[4703]: I1209 13:21:13.466345 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerStarted","Data":"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03"} Dec 09 13:21:13 crc kubenswrapper[4703]: I1209 13:21:13.498914 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kbkpk" podStartSLOduration=2.543319953 podStartE2EDuration="5.498891607s" podCreationTimestamp="2025-12-09 13:21:08 +0000 UTC" firstStartedPulling="2025-12-09 13:21:09.409425141 +0000 UTC m=+4568.658188660" lastFinishedPulling="2025-12-09 13:21:12.364996785 +0000 UTC m=+4571.613760314" observedRunningTime="2025-12-09 13:21:13.486643858 +0000 UTC m=+4572.735407377" watchObservedRunningTime="2025-12-09 13:21:13.498891607 +0000 UTC m=+4572.747655126" Dec 09 13:21:15 crc kubenswrapper[4703]: E1209 13:21:15.074606 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:21:18 crc kubenswrapper[4703]: I1209 13:21:18.367744 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:18 crc kubenswrapper[4703]: I1209 13:21:18.368774 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:18 crc kubenswrapper[4703]: I1209 13:21:18.423484 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:18 crc kubenswrapper[4703]: I1209 13:21:18.573671 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:18 crc kubenswrapper[4703]: I1209 13:21:18.665714 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:19 crc kubenswrapper[4703]: I1209 13:21:19.071367 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446" Dec 09 13:21:19 crc kubenswrapper[4703]: E1209 13:21:19.072144 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:21:20 crc kubenswrapper[4703]: E1209 13:21:20.074843 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:21:20 crc kubenswrapper[4703]: I1209 13:21:20.545498 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kbkpk" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="registry-server" containerID="cri-o://cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03" gracePeriod=2 Dec 09 13:21:20 crc kubenswrapper[4703]: I1209 13:21:20.563772 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:20 crc kubenswrapper[4703]: I1209 13:21:20.634759 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.065202 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"] Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.209523 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.254831 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content\") pod \"42ffc936-e5e5-4919-b4a3-e3040d948852\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.255131 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities\") pod \"42ffc936-e5e5-4919-b4a3-e3040d948852\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.255206 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5ls\" (UniqueName: \"kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls\") pod \"42ffc936-e5e5-4919-b4a3-e3040d948852\" (UID: \"42ffc936-e5e5-4919-b4a3-e3040d948852\") " Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.256404 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities" (OuterVolumeSpecName: "utilities") pod "42ffc936-e5e5-4919-b4a3-e3040d948852" (UID: "42ffc936-e5e5-4919-b4a3-e3040d948852"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.265492 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls" (OuterVolumeSpecName: "kube-api-access-6z5ls") pod "42ffc936-e5e5-4919-b4a3-e3040d948852" (UID: "42ffc936-e5e5-4919-b4a3-e3040d948852"). InnerVolumeSpecName "kube-api-access-6z5ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.309269 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42ffc936-e5e5-4919-b4a3-e3040d948852" (UID: "42ffc936-e5e5-4919-b4a3-e3040d948852"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.358724 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.359147 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5ls\" (UniqueName: \"kubernetes.io/projected/42ffc936-e5e5-4919-b4a3-e3040d948852-kube-api-access-6z5ls\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.359245 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ffc936-e5e5-4919-b4a3-e3040d948852-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.559769 4703 generic.go:334] "Generic (PLEG): container finished" podID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerID="cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03" exitCode=0 Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.559857 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kbkpk" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.559873 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerDied","Data":"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03"} Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.559935 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kbkpk" event={"ID":"42ffc936-e5e5-4919-b4a3-e3040d948852","Type":"ContainerDied","Data":"b635f537dd00ce2534fc0de96dbcc5a57f9a1c6a9df2987bb24a3cc0b9190755"} Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.559960 4703 scope.go:117] "RemoveContainer" containerID="cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.598000 4703 scope.go:117] "RemoveContainer" containerID="7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.609797 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.621574 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kbkpk"] Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.625828 4703 scope.go:117] "RemoveContainer" containerID="ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.696598 4703 scope.go:117] "RemoveContainer" containerID="cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03" Dec 09 13:21:21 crc kubenswrapper[4703]: E1209 13:21:21.697275 4703 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03\": container with ID starting with cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03 not found: ID does not exist" containerID="cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.697314 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03"} err="failed to get container status \"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03\": rpc error: code = NotFound desc = could not find container \"cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03\": container with ID starting with cfae21c9a4d183c625b67793ac159e0076edd384bab98aa99852f3d00b244b03 not found: ID does not exist" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.697347 4703 scope.go:117] "RemoveContainer" containerID="7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16" Dec 09 13:21:21 crc kubenswrapper[4703]: E1209 13:21:21.697697 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16\": container with ID starting with 7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16 not found: ID does not exist" containerID="7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.697790 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16"} err="failed to get container status \"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16\": rpc error: code = NotFound desc = could not find container \"7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16\": container with ID starting with 7038d6100523443fa722ae7c28c988c4aea0cfdd95244c1ad063f702b5a0ba16 not found: ID does not exist" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.697842 4703 scope.go:117] "RemoveContainer" containerID="ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d" Dec 09 13:21:21 crc kubenswrapper[4703]: E1209 13:21:21.698689 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d\": container with ID starting with ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d not found: ID does not exist" containerID="ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d" Dec 09 13:21:21 crc kubenswrapper[4703]: I1209 13:21:21.698749 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d"} err="failed to get container status \"ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d\": rpc error: code = NotFound desc = could not find container \"ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d\": container with ID starting with ffeef5716fe583682ced65e4a3c4a43773ff448d68ea47806ff7f172d911e90d not found: ID does not exist" Dec 09 13:21:22 crc kubenswrapper[4703]: I1209 13:21:22.574042 4703 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q7x9k" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="registry-server" containerID="cri-o://3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df" gracePeriod=2 Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.085001 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" path="/var/lib/kubelet/pods/42ffc936-e5e5-4919-b4a3-e3040d948852/volumes" Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.201980 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7x9k" Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.324113 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities\") pod \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.324284 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qnd7\" (UniqueName: \"kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7\") pod \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.324543 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content\") pod \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\" (UID: \"c6906221-7cfa-4fc2-8e57-e090d3ae9721\") " Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.325602 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities" (OuterVolumeSpecName: "utilities") pod "c6906221-7cfa-4fc2-8e57-e090d3ae9721" (UID: "c6906221-7cfa-4fc2-8e57-e090d3ae9721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.326044 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.335490 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7" (OuterVolumeSpecName: "kube-api-access-5qnd7") pod "c6906221-7cfa-4fc2-8e57-e090d3ae9721" (UID: "c6906221-7cfa-4fc2-8e57-e090d3ae9721"). InnerVolumeSpecName "kube-api-access-5qnd7". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.427427 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qnd7\" (UniqueName: \"kubernetes.io/projected/c6906221-7cfa-4fc2-8e57-e090d3ae9721-kube-api-access-5qnd7\") on node \"crc\" DevicePath \"\""
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.456625 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6906221-7cfa-4fc2-8e57-e090d3ae9721" (UID: "c6906221-7cfa-4fc2-8e57-e090d3ae9721"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.529562 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6906221-7cfa-4fc2-8e57-e090d3ae9721-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.589800 4703 generic.go:334] "Generic (PLEG): container finished" podID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerID="3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df" exitCode=0
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.589861 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerDied","Data":"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"}
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.589920 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q7x9k" event={"ID":"c6906221-7cfa-4fc2-8e57-e090d3ae9721","Type":"ContainerDied","Data":"88675359b3e335290315b0c0b1cf08a4f28101bd8c7dd017c21313d8e1d9e332"}
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.589949 4703 scope.go:117] "RemoveContainer" containerID="3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.589986 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q7x9k"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.632326 4703 scope.go:117] "RemoveContainer" containerID="0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.637052 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"]
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.647595 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q7x9k"]
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.667555 4703 scope.go:117] "RemoveContainer" containerID="b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.722809 4703 scope.go:117] "RemoveContainer" containerID="3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"
Dec 09 13:21:23 crc kubenswrapper[4703]: E1209 13:21:23.723422 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df\": container with ID starting with 3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df not found: ID does not exist" containerID="3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.723465 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df"} err="failed to get container status \"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df\": rpc error: code = NotFound desc = could not find container \"3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df\": container with ID starting with 3b8e68c3b560fa466340396e2045e444775ce5c49fafc7acb2c0e90643adb8df not found: ID does not exist"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.723489 4703 scope.go:117] "RemoveContainer" containerID="0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"
Dec 09 13:21:23 crc kubenswrapper[4703]: E1209 13:21:23.724254 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4\": container with ID starting with 0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4 not found: ID does not exist" containerID="0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.724300 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4"} err="failed to get container status \"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4\": rpc error: code = NotFound desc = could not find container \"0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4\": container with ID starting with 0e31b2357b6e18115062b44855fd24ecbcb8a9db6c2ec41dd651042c88bd5ac4 not found: ID does not exist"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.724322 4703 scope.go:117] "RemoveContainer" containerID="b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9"
Dec 09 13:21:23 crc kubenswrapper[4703]: E1209 13:21:23.724757 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9\": container with ID starting with b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9 not found: ID does not exist" containerID="b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9"
Dec 09 13:21:23 crc kubenswrapper[4703]: I1209 13:21:23.724792 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9"} err="failed to get container status \"b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9\": rpc error: code = NotFound desc = could not find container \"b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9\": container with ID starting with b2434a8cec5bbb36813da9c4cb097c00c0d8a94d4ddb9c2371845e84aa6c8fc9 not found: ID does not exist"
Dec 09 13:21:25 crc kubenswrapper[4703]: I1209 13:21:25.082286 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" path="/var/lib/kubelet/pods/c6906221-7cfa-4fc2-8e57-e090d3ae9721/volumes"
Dec 09 13:21:27 crc kubenswrapper[4703]: E1209 13:21:27.072048 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:21:29 crc kubenswrapper[4703]: I1209 13:21:29.677467 4703 generic.go:334] "Generic (PLEG): container finished" podID="43ca3e6a-7682-46cb-97e4-51c7a8010e98" containerID="d258e0eedeead6698847e3391a4988c2297996e413f37a5003e4d87da7f7000f" exitCode=2
Dec 09 13:21:29 crc kubenswrapper[4703]: I1209 13:21:29.677565 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m" event={"ID":"43ca3e6a-7682-46cb-97e4-51c7a8010e98","Type":"ContainerDied","Data":"d258e0eedeead6698847e3391a4988c2297996e413f37a5003e4d87da7f7000f"}
Dec 09 13:21:31 crc kubenswrapper[4703]: E1209 13:21:31.112802 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.244062 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.319097 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key\") pod \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") "
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.319287 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory\") pod \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") "
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.319487 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9nz9\" (UniqueName: \"kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9\") pod \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\" (UID: \"43ca3e6a-7682-46cb-97e4-51c7a8010e98\") "
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.340684 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9" (OuterVolumeSpecName: "kube-api-access-w9nz9") pod "43ca3e6a-7682-46cb-97e4-51c7a8010e98" (UID: "43ca3e6a-7682-46cb-97e4-51c7a8010e98"). InnerVolumeSpecName "kube-api-access-w9nz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.353708 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory" (OuterVolumeSpecName: "inventory") pod "43ca3e6a-7682-46cb-97e4-51c7a8010e98" (UID: "43ca3e6a-7682-46cb-97e4-51c7a8010e98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.360927 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43ca3e6a-7682-46cb-97e4-51c7a8010e98" (UID: "43ca3e6a-7682-46cb-97e4-51c7a8010e98"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.422442 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.422488 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ca3e6a-7682-46cb-97e4-51c7a8010e98-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.422499 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9nz9\" (UniqueName: \"kubernetes.io/projected/43ca3e6a-7682-46cb-97e4-51c7a8010e98-kube-api-access-w9nz9\") on node \"crc\" DevicePath \"\""
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.720143 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m" event={"ID":"43ca3e6a-7682-46cb-97e4-51c7a8010e98","Type":"ContainerDied","Data":"ff5ce3255d607c9dd927fa1b3d39bfdb9853727ff1f14be1f8da40151bd87a34"}
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.720227 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5ce3255d607c9dd927fa1b3d39bfdb9853727ff1f14be1f8da40151bd87a34"
Dec 09 13:21:31 crc kubenswrapper[4703]: I1209 13:21:31.720323 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m"
Dec 09 13:21:33 crc kubenswrapper[4703]: I1209 13:21:33.070741 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:21:33 crc kubenswrapper[4703]: I1209 13:21:33.743468 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df"}
Dec 09 13:21:40 crc kubenswrapper[4703]: E1209 13:21:40.072819 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:21:43 crc kubenswrapper[4703]: E1209 13:21:43.073573 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:21:51 crc kubenswrapper[4703]: E1209 13:21:51.081203 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:21:54 crc kubenswrapper[4703]: E1209 13:21:54.072680 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:03 crc kubenswrapper[4703]: E1209 13:22:03.072435 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:22:06 crc kubenswrapper[4703]: E1209 13:22:06.072898 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:15 crc kubenswrapper[4703]: E1209 13:22:15.073712 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:22:18 crc kubenswrapper[4703]: E1209 13:22:18.073639 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:28 crc kubenswrapper[4703]: E1209 13:22:28.072408 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:22:33 crc kubenswrapper[4703]: E1209 13:22:33.072769 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:42 crc kubenswrapper[4703]: E1209 13:22:42.072971 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:22:44 crc kubenswrapper[4703]: E1209 13:22:44.072404 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:55 crc kubenswrapper[4703]: E1209 13:22:55.073260 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:22:57 crc kubenswrapper[4703]: E1209 13:22:57.072129 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:23:08 crc kubenswrapper[4703]: E1209 13:23:08.072979 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:23:08 crc kubenswrapper[4703]: E1209 13:23:08.074499 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:23:20 crc kubenswrapper[4703]: E1209 13:23:20.075111 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:23:23 crc kubenswrapper[4703]: E1209 13:23:23.073134 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:23:34 crc kubenswrapper[4703]: E1209 13:23:34.073661 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:23:36 crc kubenswrapper[4703]: E1209 13:23:36.071697 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:23:46 crc kubenswrapper[4703]: E1209 13:23:46.071992 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:23:51 crc kubenswrapper[4703]: E1209 13:23:51.081922 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:23:58 crc kubenswrapper[4703]: E1209 13:23:58.073629 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:24:00 crc kubenswrapper[4703]: I1209 13:24:00.083583 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:24:00 crc kubenswrapper[4703]: I1209 13:24:00.084074 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:24:04 crc kubenswrapper[4703]: E1209 13:24:04.073631 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:24:13 crc kubenswrapper[4703]: E1209 13:24:13.073212 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:24:19 crc kubenswrapper[4703]: E1209 13:24:19.073449 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:24:25 crc kubenswrapper[4703]: E1209 13:24:25.072707 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:24:30 crc kubenswrapper[4703]: I1209 13:24:30.083959 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:24:30 crc kubenswrapper[4703]: I1209 13:24:30.084798 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:24:34 crc kubenswrapper[4703]: I1209 13:24:34.074564 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 13:24:34 crc kubenswrapper[4703]: E1209 13:24:34.206719 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 13:24:34 crc kubenswrapper[4703]: E1209 13:24:34.207096 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 13:24:34 crc kubenswrapper[4703]: E1209 13:24:34.207393 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:24:34 crc kubenswrapper[4703]: E1209 13:24:34.208491 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:24:36 crc kubenswrapper[4703]: E1209 13:24:36.072361 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:24:48 crc kubenswrapper[4703]: E1209 13:24:48.073674 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:24:49 crc kubenswrapper[4703]: E1209 13:24:49.076564 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:24:59 crc kubenswrapper[4703]: E1209 13:24:59.208005 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 13:24:59 crc kubenswrapper[4703]: E1209 13:24:59.208713 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 13:24:59 crc kubenswrapper[4703]: E1209 13:24:59.209027 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:24:59 crc kubenswrapper[4703]: E1209 13:24:59.210240 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:25:00 crc kubenswrapper[4703]: E1209 13:25:00.072104 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.084165 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.084464 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.084694 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk"
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.086043 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.086442 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df" gracePeriod=600
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.219431 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df" exitCode=0
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.219498 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df"}
Dec 09 13:25:00 crc kubenswrapper[4703]: I1209 13:25:00.219549 4703 scope.go:117] "RemoveContainer" containerID="201b468eb3deb9c9f49909653b73cb425575b5a6787ffb529578793f3e143446"
Dec 09 13:25:01 crc kubenswrapper[4703]: I1209 13:25:01.235360 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b"}
Dec 09 13:25:14 crc kubenswrapper[4703]: E1209 13:25:14.077085 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:25:14 crc kubenswrapper[4703]: E1209 13:25:14.077788 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:25:25 crc kubenswrapper[4703]: E1209 13:25:25.075002 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:25:25 crc kubenswrapper[4703]: E1209 13:25:25.075113 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:25:38 crc kubenswrapper[4703]: E1209 13:25:38.073509 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:25:39 crc kubenswrapper[4703]: E1209 13:25:39.072836 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.891702 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.894589 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ca3e6a-7682-46cb-97e4-51c7a8010e98" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.896378 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ca3e6a-7682-46cb-97e4-51c7a8010e98" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.896622 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="extract-utilities"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.896729 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="extract-utilities"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.896837 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="extract-content"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.896919 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="extract-content"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.897033 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.897125 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.897259 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="extract-utilities"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.897354 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="extract-utilities"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.897476 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="extract-content"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.897590 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="extract-content"
Dec 09 13:25:49 crc kubenswrapper[4703]: E1209 13:25:49.897683 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.897748 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.898348 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6906221-7cfa-4fc2-8e57-e090d3ae9721" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.898480 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ffc936-e5e5-4919-b4a3-e3040d948852" containerName="registry-server"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.898599 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ca3e6a-7682-46cb-97e4-51c7a8010e98" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.901047 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:49 crc kubenswrapper[4703]: I1209 13:25:49.912649 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.095251 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjf4\" (UniqueName: \"kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.095797 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.096020 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.198699 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjf4\" (UniqueName: \"kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.199003 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.199214 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.199972 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.200031 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.226936 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjf4\" (UniqueName: \"kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4\") pod \"redhat-marketplace-8cwbw\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") " pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.240775 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:25:50 crc kubenswrapper[4703]: I1209 13:25:50.877904 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:25:51 crc kubenswrapper[4703]: I1209 13:25:51.828044 4703 generic.go:334] "Generic (PLEG): container finished" podID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerID="d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16" exitCode=0
Dec 09 13:25:51 crc kubenswrapper[4703]: I1209 13:25:51.828163 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerDied","Data":"d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16"}
Dec 09 13:25:51 crc kubenswrapper[4703]: I1209 13:25:51.828432 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerStarted","Data":"d21e7a503ea069c9b042e85746ce2318627774df2e02c891d1e9d25ac9c3e58e"}
Dec 09 13:25:52 crc kubenswrapper[4703]: E1209 13:25:52.072505 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:25:52 crc kubenswrapper[4703]: I1209 13:25:52.845623 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerStarted","Data":"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"}
Dec 09 13:25:53 crc kubenswrapper[4703]: E1209 13:25:53.073358 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:25:53 crc kubenswrapper[4703]: I1209 13:25:53.861314 4703 generic.go:334] "Generic (PLEG): container finished" podID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerID="1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3" exitCode=0
Dec 09 13:25:53 crc kubenswrapper[4703]: I1209 13:25:53.861388 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerDied","Data":"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"}
Dec 09 13:25:55 crc kubenswrapper[4703]: I1209 13:25:55.885024 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerStarted","Data":"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"}
Dec 09 13:25:55 crc kubenswrapper[4703]: I1209 13:25:55.920439 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8cwbw" podStartSLOduration=4.026904707 podStartE2EDuration="6.920413278s" podCreationTimestamp="2025-12-09 13:25:49 +0000 UTC" firstStartedPulling="2025-12-09 13:25:51.830353557 +0000 UTC m=+4851.079117076" lastFinishedPulling="2025-12-09 13:25:54.723862128 +0000 UTC m=+4853.972625647" observedRunningTime="2025-12-09 13:25:55.906088745 +0000 UTC m=+4855.154852264" watchObservedRunningTime="2025-12-09 13:25:55.920413278 +0000 UTC m=+4855.169176797"
Dec 09 13:26:00 crc kubenswrapper[4703]: I1209 13:26:00.241098 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:00 crc kubenswrapper[4703]: I1209 13:26:00.243292 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:00 crc kubenswrapper[4703]: I1209 13:26:00.307311 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:00 crc kubenswrapper[4703]: I1209 13:26:00.995414 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:01 crc kubenswrapper[4703]: I1209 13:26:01.061314 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:26:02 crc kubenswrapper[4703]: I1209 13:26:02.965660 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8cwbw" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="registry-server" containerID="cri-o://970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640" gracePeriod=2
Dec 09 13:26:03 crc kubenswrapper[4703]: E1209 13:26:03.074260 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.518011 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.649099 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities\") pod \"e670524e-1598-4d4d-8700-e1a58fc261b9\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") "
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.649245 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjf4\" (UniqueName: \"kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4\") pod \"e670524e-1598-4d4d-8700-e1a58fc261b9\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") "
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.649328 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content\") pod \"e670524e-1598-4d4d-8700-e1a58fc261b9\" (UID: \"e670524e-1598-4d4d-8700-e1a58fc261b9\") "
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.651093 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities" (OuterVolumeSpecName: "utilities") pod "e670524e-1598-4d4d-8700-e1a58fc261b9" (UID: "e670524e-1598-4d4d-8700-e1a58fc261b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.659818 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4" (OuterVolumeSpecName: "kube-api-access-wfjf4") pod "e670524e-1598-4d4d-8700-e1a58fc261b9" (UID: "e670524e-1598-4d4d-8700-e1a58fc261b9"). InnerVolumeSpecName "kube-api-access-wfjf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.674150 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e670524e-1598-4d4d-8700-e1a58fc261b9" (UID: "e670524e-1598-4d4d-8700-e1a58fc261b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.753753 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.753798 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjf4\" (UniqueName: \"kubernetes.io/projected/e670524e-1598-4d4d-8700-e1a58fc261b9-kube-api-access-wfjf4\") on node \"crc\" DevicePath \"\""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.753809 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e670524e-1598-4d4d-8700-e1a58fc261b9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.979072 4703 generic.go:334] "Generic (PLEG): container finished" podID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerID="970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640" exitCode=0
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.979222 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerDied","Data":"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"}
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.979286 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8cwbw"
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.980703 4703 scope.go:117] "RemoveContainer" containerID="970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"
Dec 09 13:26:03 crc kubenswrapper[4703]: I1209 13:26:03.980488 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8cwbw" event={"ID":"e670524e-1598-4d4d-8700-e1a58fc261b9","Type":"ContainerDied","Data":"d21e7a503ea069c9b042e85746ce2318627774df2e02c891d1e9d25ac9c3e58e"}
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.006036 4703 scope.go:117] "RemoveContainer" containerID="1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.031385 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.049091 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8cwbw"]
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.053532 4703 scope.go:117] "RemoveContainer" containerID="d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.100533 4703 scope.go:117] "RemoveContainer" containerID="970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"
Dec 09 13:26:04 crc kubenswrapper[4703]: E1209 13:26:04.101659 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640\": container with ID starting with 970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640 not found: ID does not exist" containerID="970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.101705 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640"} err="failed to get container status \"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640\": rpc error: code = NotFound desc = could not find container \"970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640\": container with ID starting with 970a39a1b6867c9705ec2f9292cbbd63f741399ac5f6dbe30b73639c844b3640 not found: ID does not exist"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.101734 4703 scope.go:117] "RemoveContainer" containerID="1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"
Dec 09 13:26:04 crc kubenswrapper[4703]: E1209 13:26:04.102325 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3\": container with ID starting with 1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3 not found: ID does not exist" containerID="1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.102387 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3"} err="failed to get container status \"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3\": rpc error: code = NotFound desc = could not find container \"1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3\": container with ID starting with 1f039e9478d299092a100c1eb7d22141931f95e5683439416dd30af89b8dade3 not found: ID does not exist"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.102428 4703 scope.go:117] "RemoveContainer" containerID="d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16"
Dec 09 13:26:04 crc kubenswrapper[4703]: E1209 13:26:04.103043 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16\": container with ID starting with d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16 not found: ID does not exist" containerID="d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16"
Dec 09 13:26:04 crc kubenswrapper[4703]: I1209 13:26:04.103120 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16"} err="failed to get container status \"d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16\": rpc error: code = NotFound desc = could not find container \"d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16\": container with ID starting with d77ec187edbc71906f0b8ccbb42abf055d1911c9717f87fc3b2e4483163f7c16 not found: ID does not exist"
Dec 09 13:26:05 crc kubenswrapper[4703]: E1209 13:26:05.073477 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:26:05 crc kubenswrapper[4703]: I1209 13:26:05.084103 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" path="/var/lib/kubelet/pods/e670524e-1598-4d4d-8700-e1a58fc261b9/volumes"
Dec 09 13:26:18 crc kubenswrapper[4703]: E1209 13:26:18.073759 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:26:19 crc kubenswrapper[4703]: E1209 13:26:19.072632 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:26:32 crc kubenswrapper[4703]: E1209 13:26:32.073492 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:26:33 crc kubenswrapper[4703]: E1209 13:26:33.072703 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:26:45 crc kubenswrapper[4703]: E1209 13:26:45.074366 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:26:47 crc kubenswrapper[4703]: E1209 13:26:47.073136 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.035522 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"]
Dec 09 13:26:48 crc kubenswrapper[4703]: E1209 13:26:48.038855 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="registry-server"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.038880 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="registry-server"
Dec 09 13:26:48 crc kubenswrapper[4703]: E1209 13:26:48.038916 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="extract-utilities"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.038923 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="extract-utilities"
Dec 09 13:26:48 crc kubenswrapper[4703]: E1209 13:26:48.038941 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="extract-content"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.038948 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="extract-content"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.039199 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e670524e-1598-4d4d-8700-e1a58fc261b9" containerName="registry-server"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.040165 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.044600 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.045526 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.046460 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-8xnzm"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.046856 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.073052 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"]
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.138835 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.138948 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nfcv\" (UniqueName: \"kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.139289 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"
Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.242243 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") "
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.242398 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.242478 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nfcv\" (UniqueName: \"kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.256096 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.269749 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.278098 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nfcv\" (UniqueName: \"kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-txcq5\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:48 crc kubenswrapper[4703]: I1209 13:26:48.362395 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:26:49 crc kubenswrapper[4703]: I1209 13:26:48.927578 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5"] Dec 09 13:26:49 crc kubenswrapper[4703]: I1209 13:26:49.515027 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" event={"ID":"b3b60855-8704-45ce-9688-2364ca1978f0","Type":"ContainerStarted","Data":"2aa8421421105df18df225a7ae2a288fb966f0149ac5a923d256898638e2ddbe"} Dec 09 13:26:50 crc kubenswrapper[4703]: I1209 13:26:50.534874 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" event={"ID":"b3b60855-8704-45ce-9688-2364ca1978f0","Type":"ContainerStarted","Data":"ceef06294f3c1386364c167c693c3cb50b20df40ac8ca30062f2aa62cf0b83eb"} Dec 09 13:26:50 crc kubenswrapper[4703]: I1209 13:26:50.560871 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" podStartSLOduration=2.097367645 podStartE2EDuration="2.560845219s" podCreationTimestamp="2025-12-09 13:26:48 +0000 UTC" firstStartedPulling="2025-12-09 13:26:48.93383663 +0000 UTC m=+4908.182600149" lastFinishedPulling="2025-12-09 13:26:49.397314194 +0000 UTC m=+4908.646077723" observedRunningTime="2025-12-09 13:26:50.555640819 +0000 UTC m=+4909.804404338" watchObservedRunningTime="2025-12-09 13:26:50.560845219 +0000 UTC m=+4909.809608738" Dec 09 13:26:58 crc kubenswrapper[4703]: E1209 13:26:58.073531 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:27:00 crc kubenswrapper[4703]: I1209 13:27:00.084143 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:27:00 crc kubenswrapper[4703]: I1209 13:27:00.084554 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:27:02 crc kubenswrapper[4703]: E1209 13:27:02.073361 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:27:12 crc kubenswrapper[4703]: E1209 13:27:12.074266 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:27:15 crc kubenswrapper[4703]: E1209 13:27:15.072558 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:27:26 crc kubenswrapper[4703]: E1209 13:27:26.073439 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:27:27 crc kubenswrapper[4703]: E1209 13:27:27.072928 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:27:30 crc kubenswrapper[4703]: I1209 13:27:30.083992 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:27:30 crc kubenswrapper[4703]: I1209 13:27:30.084667 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:27:39 crc kubenswrapper[4703]: E1209 13:27:39.075000 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:27:40 crc kubenswrapper[4703]: E1209 13:27:40.072625 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:27:53 crc kubenswrapper[4703]: E1209 13:27:53.074349 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:27:53 crc kubenswrapper[4703]: E1209 13:27:53.074373 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.083431 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.084297 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.084354 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.085581 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.085652 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" gracePeriod=600 Dec 09 13:28:00 crc kubenswrapper[4703]: E1209 13:28:00.230799 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.284636 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" exitCode=0 Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.284737 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b"} Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.285064 4703 scope.go:117] "RemoveContainer" containerID="006e78894076f90d553f2aff7ff4794812b06251e7c8d5ded4877eefe1f794df" Dec 09 13:28:00 crc kubenswrapper[4703]: I1209 13:28:00.286812 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:28:00 crc kubenswrapper[4703]: E1209 13:28:00.287413 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:08 crc kubenswrapper[4703]: E1209 13:28:08.074697 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:28:08 crc kubenswrapper[4703]: E1209 13:28:08.075089 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:28:15 crc kubenswrapper[4703]: I1209 13:28:15.077292 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:28:15 crc kubenswrapper[4703]: E1209 13:28:15.078267 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:19 crc kubenswrapper[4703]: E1209 13:28:19.073133 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:28:23 crc kubenswrapper[4703]: E1209 13:28:23.072804 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:28:30 crc kubenswrapper[4703]: I1209 13:28:30.070316 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:28:30 crc kubenswrapper[4703]: E1209 13:28:30.071395 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:30 crc kubenswrapper[4703]: E1209 13:28:30.074716 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:28:35 crc kubenswrapper[4703]: E1209 13:28:35.073970 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:28:41 crc kubenswrapper[4703]: I1209 13:28:41.079202 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:28:41 crc kubenswrapper[4703]: E1209 13:28:41.080433 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:44 crc kubenswrapper[4703]: E1209 13:28:44.073953 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:28:47 crc kubenswrapper[4703]: E1209 13:28:47.072460 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:28:54 crc kubenswrapper[4703]: I1209 13:28:54.070740 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:28:54 crc kubenswrapper[4703]: E1209 13:28:54.072266 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:28:59 crc kubenswrapper[4703]: E1209 13:28:59.074966 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:29:00 crc kubenswrapper[4703]: E1209 13:29:00.073219 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:29:05 crc kubenswrapper[4703]: I1209 13:29:05.070944 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:29:05 crc kubenswrapper[4703]: E1209 13:29:05.071724 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:29:12 crc kubenswrapper[4703]: E1209 13:29:12.072994 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:29:14 crc kubenswrapper[4703]: E1209 13:29:14.074471 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:29:17 crc kubenswrapper[4703]: I1209 13:29:17.071116 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:29:17 crc kubenswrapper[4703]: E1209 13:29:17.072005 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:29:25 crc kubenswrapper[4703]: E1209 13:29:25.077299 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:29:26 crc kubenswrapper[4703]: E1209 13:29:26.073358 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:29:32 crc kubenswrapper[4703]: I1209 13:29:32.070020 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:29:32 crc kubenswrapper[4703]: E1209 13:29:32.071183 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.019338 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.022937 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.036738 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.140385 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.140689 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2plq\" (UniqueName: \"kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.140755 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.245663 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2plq\" (UniqueName: \"kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.245753 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.245812 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.246982 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " 
pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.247078 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.271399 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2plq\" (UniqueName: \"kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq\") pod \"community-operators-95qrf\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:36 crc kubenswrapper[4703]: I1209 13:29:36.351965 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:37 crc kubenswrapper[4703]: I1209 13:29:37.038126 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:37 crc kubenswrapper[4703]: I1209 13:29:37.427690 4703 generic.go:334] "Generic (PLEG): container finished" podID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerID="d7d65208323147a2597a7bd50bf16950dd7b7085ce5a44c6362a76556057589c" exitCode=0 Dec 09 13:29:37 crc kubenswrapper[4703]: I1209 13:29:37.427750 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerDied","Data":"d7d65208323147a2597a7bd50bf16950dd7b7085ce5a44c6362a76556057589c"} Dec 09 13:29:37 crc kubenswrapper[4703]: I1209 13:29:37.428160 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerStarted","Data":"f9a77e42b44e26f9568bb4f31d8fcb5d23b9eb1acfbc7752e68efd1b7182b31c"} Dec 09 13:29:37 crc kubenswrapper[4703]: I1209 13:29:37.430057 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:29:38 crc kubenswrapper[4703]: I1209 13:29:38.441088 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerStarted","Data":"cf123a4cc1b5296a00bfeb1555a6a2c412d8b038c618a250a3dd299baad4a67c"} Dec 09 13:29:39 crc kubenswrapper[4703]: E1209 13:29:39.203468 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:29:39 crc kubenswrapper[4703]: E1209 13:29:39.203552 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:29:39 crc kubenswrapper[4703]: E1209 13:29:39.203769 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 13:29:39 crc kubenswrapper[4703]: E1209 13:29:39.205012 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:29:39 crc kubenswrapper[4703]: I1209 13:29:39.464022 4703 generic.go:334] "Generic (PLEG): container finished" podID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerID="cf123a4cc1b5296a00bfeb1555a6a2c412d8b038c618a250a3dd299baad4a67c" exitCode=0 Dec 09 13:29:39 crc kubenswrapper[4703]: I1209 13:29:39.464078 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerDied","Data":"cf123a4cc1b5296a00bfeb1555a6a2c412d8b038c618a250a3dd299baad4a67c"} Dec 09 13:29:40 crc kubenswrapper[4703]: I1209 13:29:40.480592 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerStarted","Data":"4d129ce043037646643f0cecfa1ceebe709d21ba0729f4a87f7254fffafbf6f8"} Dec 09 13:29:40 crc kubenswrapper[4703]: I1209 13:29:40.508443 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-95qrf" podStartSLOduration=2.956544401 podStartE2EDuration="5.508424927s" podCreationTimestamp="2025-12-09 13:29:35 +0000 UTC" firstStartedPulling="2025-12-09 13:29:37.429834028 +0000 UTC m=+5076.678597537" lastFinishedPulling="2025-12-09 13:29:39.981714544 +0000 UTC m=+5079.230478063" observedRunningTime="2025-12-09 13:29:40.499138981 +0000 UTC m=+5079.747902500" watchObservedRunningTime="2025-12-09 13:29:40.508424927 +0000 UTC m=+5079.757188436" Dec 09 13:29:41 crc kubenswrapper[4703]: E1209 13:29:41.079898 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:29:44 crc kubenswrapper[4703]: I1209 13:29:44.071378 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:29:44 crc kubenswrapper[4703]: E1209 13:29:44.073294 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:29:46 crc kubenswrapper[4703]: I1209 13:29:46.352144 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:46 crc kubenswrapper[4703]: I1209 13:29:46.352709 4703 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:46 crc kubenswrapper[4703]: I1209 13:29:46.407840 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:46 crc kubenswrapper[4703]: I1209 13:29:46.605765 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:46 crc kubenswrapper[4703]: I1209 13:29:46.669960 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:48 crc kubenswrapper[4703]: I1209 13:29:48.573363 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-95qrf" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="registry-server" containerID="cri-o://4d129ce043037646643f0cecfa1ceebe709d21ba0729f4a87f7254fffafbf6f8" gracePeriod=2 Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.587845 4703 generic.go:334] "Generic (PLEG): container finished" podID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerID="4d129ce043037646643f0cecfa1ceebe709d21ba0729f4a87f7254fffafbf6f8" exitCode=0 Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.587936 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerDied","Data":"4d129ce043037646643f0cecfa1ceebe709d21ba0729f4a87f7254fffafbf6f8"} Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.588259 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-95qrf" event={"ID":"088f63a5-f5ac-4530-b239-2fe25f7cbd0f","Type":"ContainerDied","Data":"f9a77e42b44e26f9568bb4f31d8fcb5d23b9eb1acfbc7752e68efd1b7182b31c"} Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.588276 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a77e42b44e26f9568bb4f31d8fcb5d23b9eb1acfbc7752e68efd1b7182b31c" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.652133 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.816924 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2plq\" (UniqueName: \"kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq\") pod \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.817759 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content\") pod \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.817923 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities\") pod \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\" (UID: \"088f63a5-f5ac-4530-b239-2fe25f7cbd0f\") " Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.819300 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities" (OuterVolumeSpecName: "utilities") pod "088f63a5-f5ac-4530-b239-2fe25f7cbd0f" (UID: "088f63a5-f5ac-4530-b239-2fe25f7cbd0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.824583 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq" (OuterVolumeSpecName: "kube-api-access-j2plq") pod "088f63a5-f5ac-4530-b239-2fe25f7cbd0f" (UID: "088f63a5-f5ac-4530-b239-2fe25f7cbd0f"). InnerVolumeSpecName "kube-api-access-j2plq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.878566 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "088f63a5-f5ac-4530-b239-2fe25f7cbd0f" (UID: "088f63a5-f5ac-4530-b239-2fe25f7cbd0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.921566 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2plq\" (UniqueName: \"kubernetes.io/projected/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-kube-api-access-j2plq\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.921612 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:49 crc kubenswrapper[4703]: I1209 13:29:49.921624 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/088f63a5-f5ac-4530-b239-2fe25f7cbd0f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:50 crc kubenswrapper[4703]: I1209 13:29:50.602284 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-95qrf" Dec 09 13:29:50 crc kubenswrapper[4703]: I1209 13:29:50.646136 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:50 crc kubenswrapper[4703]: I1209 13:29:50.656534 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-95qrf"] Dec 09 13:29:51 crc kubenswrapper[4703]: I1209 13:29:51.084597 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" path="/var/lib/kubelet/pods/088f63a5-f5ac-4530-b239-2fe25f7cbd0f/volumes" Dec 09 13:29:53 crc kubenswrapper[4703]: E1209 13:29:53.073714 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:29:56 crc kubenswrapper[4703]: I1209 13:29:56.070600 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:29:56 crc kubenswrapper[4703]: E1209 13:29:56.072314 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:29:56 crc kubenswrapper[4703]: E1209 13:29:56.074392 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.175470 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"] Dec 09 13:30:00 crc kubenswrapper[4703]: E1209 13:30:00.177166 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="registry-server" Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.177220 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="registry-server" Dec 09 13:30:00 crc kubenswrapper[4703]: E1209 13:30:00.177284 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="extract-content" Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.177298 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="extract-content" Dec 09 13:30:00 crc kubenswrapper[4703]: E1209 13:30:00.177361 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="extract-utilities" Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.177375 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="extract-utilities" 
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.177717 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="088f63a5-f5ac-4530-b239-2fe25f7cbd0f" containerName="registry-server"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.179247 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.182400 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.183423 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.186387 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.186732 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpdm\" (UniqueName: \"kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.186836 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.189939 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"]
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.289570 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpdm\" (UniqueName: \"kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.289687 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.289830 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.291102 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.305090 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.312354 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpdm\" (UniqueName: \"kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm\") pod \"collect-profiles-29421450-nwqrj\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:00 crc kubenswrapper[4703]: I1209 13:30:00.515492 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:01 crc kubenswrapper[4703]: I1209 13:30:01.016369 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"]
Dec 09 13:30:01 crc kubenswrapper[4703]: I1209 13:30:01.769436 4703 generic.go:334] "Generic (PLEG): container finished" podID="de28432e-fe5d-4f9a-878c-a2b66e8c71a5" containerID="1a3aaea475e7598be080e9e13f80a4dcd1439dd3e6ff31558dcc3ccf0bd63ee9" exitCode=0
Dec 09 13:30:01 crc kubenswrapper[4703]: I1209 13:30:01.769848 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj" event={"ID":"de28432e-fe5d-4f9a-878c-a2b66e8c71a5","Type":"ContainerDied","Data":"1a3aaea475e7598be080e9e13f80a4dcd1439dd3e6ff31558dcc3ccf0bd63ee9"}
Dec 09 13:30:01 crc kubenswrapper[4703]: I1209 13:30:01.769889 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj" event={"ID":"de28432e-fe5d-4f9a-878c-a2b66e8c71a5","Type":"ContainerStarted","Data":"97b83b1a6c2752e16452af4f2ea123a313e500937e7c20adb73af3559b69fc5e"}
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.246300 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj"
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.374035 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume\") pod \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") "
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.374284 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume\") pod \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") "
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.374493 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpdm\" (UniqueName: \"kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm\") pod \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\" (UID: \"de28432e-fe5d-4f9a-878c-a2b66e8c71a5\") "
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.375035 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume" (OuterVolumeSpecName: "config-volume") pod "de28432e-fe5d-4f9a-878c-a2b66e8c71a5" (UID: "de28432e-fe5d-4f9a-878c-a2b66e8c71a5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.375230 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.381619 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de28432e-fe5d-4f9a-878c-a2b66e8c71a5" (UID: "de28432e-fe5d-4f9a-878c-a2b66e8c71a5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.382308 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm" (OuterVolumeSpecName: "kube-api-access-pmpdm") pod "de28432e-fe5d-4f9a-878c-a2b66e8c71a5" (UID: "de28432e-fe5d-4f9a-878c-a2b66e8c71a5"). InnerVolumeSpecName "kube-api-access-pmpdm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.477762 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmpdm\" (UniqueName: \"kubernetes.io/projected/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-kube-api-access-pmpdm\") on node \"crc\" DevicePath \"\"" Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.477820 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de28432e-fe5d-4f9a-878c-a2b66e8c71a5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.806687 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj" event={"ID":"de28432e-fe5d-4f9a-878c-a2b66e8c71a5","Type":"ContainerDied","Data":"97b83b1a6c2752e16452af4f2ea123a313e500937e7c20adb73af3559b69fc5e"} Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.806739 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b83b1a6c2752e16452af4f2ea123a313e500937e7c20adb73af3559b69fc5e" Dec 09 13:30:03 crc kubenswrapper[4703]: I1209 13:30:03.806809 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-nwqrj" Dec 09 13:30:04 crc kubenswrapper[4703]: I1209 13:30:04.344033 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9"] Dec 09 13:30:04 crc kubenswrapper[4703]: I1209 13:30:04.355466 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-hznf9"] Dec 09 13:30:05 crc kubenswrapper[4703]: I1209 13:30:05.084995 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99fd229-9d94-411b-a64e-3edf0afffe01" path="/var/lib/kubelet/pods/d99fd229-9d94-411b-a64e-3edf0afffe01/volumes" Dec 09 13:30:07 crc kubenswrapper[4703]: E1209 13:30:07.207986 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:30:07 crc kubenswrapper[4703]: E1209 13:30:07.208420 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:30:07 crc kubenswrapper[4703]: E1209 13:30:07.208673 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:30:07 crc kubenswrapper[4703]: E1209 13:30:07.209904 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:30:08 crc kubenswrapper[4703]: E1209 13:30:08.071522 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:30:11 crc kubenswrapper[4703]: I1209 13:30:11.076827 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:30:11 crc kubenswrapper[4703]: E1209 13:30:11.077840 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:30:20 crc kubenswrapper[4703]: E1209 13:30:20.072002 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:30:22 crc kubenswrapper[4703]: E1209 13:30:22.072978 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:30:24 crc kubenswrapper[4703]: I1209 13:30:24.071145 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:30:24 crc kubenswrapper[4703]: E1209 13:30:24.072050 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:30:24 crc kubenswrapper[4703]: I1209 13:30:24.516048 4703 scope.go:117] "RemoveContainer" containerID="57b5e80d337fcdd8d89bc10f93c6b80354ca1c3d0e5acafa1a35085ebd40f142" Dec 09 13:30:31 crc kubenswrapper[4703]: E1209 13:30:31.080161 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:30:37 crc kubenswrapper[4703]: I1209 13:30:37.070332 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:30:37 crc kubenswrapper[4703]: E1209 13:30:37.071413 4703 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:30:37 crc kubenswrapper[4703]: E1209 13:30:37.072428 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:30:44 crc kubenswrapper[4703]: E1209 13:30:44.072521 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:30:50 crc kubenswrapper[4703]: E1209 13:30:50.073033 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:30:52 crc kubenswrapper[4703]: I1209 13:30:52.070661 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:30:52 crc kubenswrapper[4703]: E1209 13:30:52.071482 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:30:57 crc kubenswrapper[4703]: E1209 13:30:57.074750 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:31:02 crc kubenswrapper[4703]: E1209 13:31:02.073432 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:31:06 crc kubenswrapper[4703]: I1209 13:31:06.069647 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:31:06 crc kubenswrapper[4703]: E1209 13:31:06.071854 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:31:11 crc kubenswrapper[4703]: E1209 13:31:11.080645 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:31:14 crc kubenswrapper[4703]: E1209 13:31:14.074027 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:31:18 crc kubenswrapper[4703]: I1209 13:31:18.070475 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:31:18 crc kubenswrapper[4703]: E1209 13:31:18.071760 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.675280 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66jg7"] Dec 09 13:31:20 crc kubenswrapper[4703]: E1209 13:31:20.678027 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de28432e-fe5d-4f9a-878c-a2b66e8c71a5" containerName="collect-profiles" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.678154 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="de28432e-fe5d-4f9a-878c-a2b66e8c71a5" containerName="collect-profiles" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.678605 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="de28432e-fe5d-4f9a-878c-a2b66e8c71a5" containerName="collect-profiles" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.681016 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.692914 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66jg7"] Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.814952 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9r5\" (UniqueName: \"kubernetes.io/projected/c345164c-369d-4bc8-bb87-937657bc641a-kube-api-access-tm9r5\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.815714 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-catalog-content\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.815873 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-utilities\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.918853 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9r5\" (UniqueName: \"kubernetes.io/projected/c345164c-369d-4bc8-bb87-937657bc641a-kube-api-access-tm9r5\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.919473 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-catalog-content\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.919540 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-utilities\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.920481 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-utilities\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.920692 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c345164c-369d-4bc8-bb87-937657bc641a-catalog-content\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:20 crc kubenswrapper[4703]: I1209 13:31:20.944608 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tm9r5\" (UniqueName: \"kubernetes.io/projected/c345164c-369d-4bc8-bb87-937657bc641a-kube-api-access-tm9r5\") pod \"redhat-operators-66jg7\" (UID: \"c345164c-369d-4bc8-bb87-937657bc641a\") " pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:21 crc kubenswrapper[4703]: I1209 13:31:21.014250 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:21 crc kubenswrapper[4703]: I1209 13:31:21.541288 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66jg7"] Dec 09 13:31:21 crc kubenswrapper[4703]: I1209 13:31:21.757171 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66jg7" event={"ID":"c345164c-369d-4bc8-bb87-937657bc641a","Type":"ContainerStarted","Data":"817853d6a0af6698f19f6006fe493e3730e006a85efb88161455caeadd380641"} Dec 09 13:31:22 crc kubenswrapper[4703]: I1209 13:31:22.769704 4703 generic.go:334] "Generic (PLEG): container finished" podID="c345164c-369d-4bc8-bb87-937657bc641a" containerID="fbcc91b2787437ede5bb013dfaba18f76f714a09b603830bf2c733faa5867dea" exitCode=0 Dec 09 13:31:22 crc kubenswrapper[4703]: I1209 13:31:22.769757 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66jg7" event={"ID":"c345164c-369d-4bc8-bb87-937657bc641a","Type":"ContainerDied","Data":"fbcc91b2787437ede5bb013dfaba18f76f714a09b603830bf2c733faa5867dea"} Dec 09 13:31:25 crc kubenswrapper[4703]: E1209 13:31:25.073435 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:31:29 crc kubenswrapper[4703]: E1209 13:31:29.082689 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:31:32 crc kubenswrapper[4703]: I1209 13:31:32.070617 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:31:32 crc kubenswrapper[4703]: E1209 13:31:32.071699 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:31:35 crc kubenswrapper[4703]: I1209 13:31:35.950928 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66jg7" event={"ID":"c345164c-369d-4bc8-bb87-937657bc641a","Type":"ContainerStarted","Data":"e5dc0f01c14641678bff6e3e91f1bb9cb552406e6c88c4790b7731517a271915"} Dec 09 13:31:38 crc kubenswrapper[4703]: I1209 13:31:38.995115 4703 generic.go:334] "Generic (PLEG): container finished" podID="c345164c-369d-4bc8-bb87-937657bc641a" 
containerID="e5dc0f01c14641678bff6e3e91f1bb9cb552406e6c88c4790b7731517a271915" exitCode=0 Dec 09 13:31:38 crc kubenswrapper[4703]: I1209 13:31:38.995256 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66jg7" event={"ID":"c345164c-369d-4bc8-bb87-937657bc641a","Type":"ContainerDied","Data":"e5dc0f01c14641678bff6e3e91f1bb9cb552406e6c88c4790b7731517a271915"} Dec 09 13:31:40 crc kubenswrapper[4703]: I1209 13:31:40.010131 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66jg7" event={"ID":"c345164c-369d-4bc8-bb87-937657bc641a","Type":"ContainerStarted","Data":"d9b20896874de5af03c5256f8dca2c25f5b009d6d7bfec620ff8af97ec5fcc00"} Dec 09 13:31:40 crc kubenswrapper[4703]: I1209 13:31:40.033617 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66jg7" podStartSLOduration=3.38088034 podStartE2EDuration="20.033586666s" podCreationTimestamp="2025-12-09 13:31:20 +0000 UTC" firstStartedPulling="2025-12-09 13:31:22.773968892 +0000 UTC m=+5182.022732411" lastFinishedPulling="2025-12-09 13:31:39.426675218 +0000 UTC m=+5198.675438737" observedRunningTime="2025-12-09 13:31:40.03259862 +0000 UTC m=+5199.281362149" watchObservedRunningTime="2025-12-09 13:31:40.033586666 +0000 UTC m=+5199.282350185" Dec 09 13:31:40 crc kubenswrapper[4703]: E1209 13:31:40.071838 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:31:41 crc kubenswrapper[4703]: I1209 13:31:41.015846 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:41 crc kubenswrapper[4703]: I1209 13:31:41.016541 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:42 crc kubenswrapper[4703]: E1209 13:31:42.073810 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:31:42 crc kubenswrapper[4703]: I1209 13:31:42.074353 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66jg7" podUID="c345164c-369d-4bc8-bb87-937657bc641a" containerName="registry-server" probeResult="failure" output=< Dec 09 13:31:42 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 13:31:42 crc kubenswrapper[4703]: > Dec 09 13:31:45 crc kubenswrapper[4703]: I1209 13:31:45.070598 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:31:45 crc kubenswrapper[4703]: E1209 13:31:45.071371 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:31:51 crc kubenswrapper[4703]: E1209 13:31:51.080273 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:31:51 crc kubenswrapper[4703]: I1209 13:31:51.095870 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:51 crc kubenswrapper[4703]: I1209 13:31:51.204078 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66jg7" Dec 09 13:31:51 crc kubenswrapper[4703]: I1209 13:31:51.693134 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66jg7"] Dec 09 13:31:51 crc kubenswrapper[4703]: I1209 13:31:51.870953 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"] Dec 09 13:31:51 crc kubenswrapper[4703]: I1209 13:31:51.871667 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5v8c" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="registry-server" containerID="cri-o://1c3570f4e3735076fcd4950fb348bd7c8d42feb423e60392b2c0baa6a3bb5b7b" gracePeriod=2 Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.181414 4703 generic.go:334] "Generic (PLEG): container finished" podID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerID="1c3570f4e3735076fcd4950fb348bd7c8d42feb423e60392b2c0baa6a3bb5b7b" exitCode=0 Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.181504 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerDied","Data":"1c3570f4e3735076fcd4950fb348bd7c8d42feb423e60392b2c0baa6a3bb5b7b"} Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.500398 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.594717 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities\") pod \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.594896 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content\") pod \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.595128 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9krdq\" (UniqueName: \"kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq\") pod \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\" (UID: \"e5a41fd7-a4e8-406a-9a10-2171a304ec62\") " Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.598493 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities" (OuterVolumeSpecName: "utilities") pod "e5a41fd7-a4e8-406a-9a10-2171a304ec62" (UID: "e5a41fd7-a4e8-406a-9a10-2171a304ec62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.620341 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq" (OuterVolumeSpecName: "kube-api-access-9krdq") pod "e5a41fd7-a4e8-406a-9a10-2171a304ec62" (UID: "e5a41fd7-a4e8-406a-9a10-2171a304ec62"). InnerVolumeSpecName "kube-api-access-9krdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.699002 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.699053 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9krdq\" (UniqueName: \"kubernetes.io/projected/e5a41fd7-a4e8-406a-9a10-2171a304ec62-kube-api-access-9krdq\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.748109 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5a41fd7-a4e8-406a-9a10-2171a304ec62" (UID: "e5a41fd7-a4e8-406a-9a10-2171a304ec62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:31:52 crc kubenswrapper[4703]: I1209 13:31:52.801703 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5a41fd7-a4e8-406a-9a10-2171a304ec62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.197708 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5v8c" event={"ID":"e5a41fd7-a4e8-406a-9a10-2171a304ec62","Type":"ContainerDied","Data":"b9011a2e89c244b8a57ce8e6f56e00a407e4602aecc7369ca3ba964366bce277"} Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.197793 4703 scope.go:117] "RemoveContainer" containerID="1c3570f4e3735076fcd4950fb348bd7c8d42feb423e60392b2c0baa6a3bb5b7b" Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.197820 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5v8c" Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.230458 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"] Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.232701 4703 scope.go:117] "RemoveContainer" containerID="93cee9e216d7cedc8637af40de84195564c3664b5a55770c37b7ba34bce9279b" Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.253168 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5v8c"] Dec 09 13:31:53 crc kubenswrapper[4703]: I1209 13:31:53.262649 4703 scope.go:117] "RemoveContainer" containerID="e6698a957d327ba7faf45e79d69bebbb91f5144d5cf943ff3b8a2ae9c72c5fa1" Dec 09 13:31:55 crc kubenswrapper[4703]: I1209 13:31:55.082794 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" path="/var/lib/kubelet/pods/e5a41fd7-a4e8-406a-9a10-2171a304ec62/volumes" Dec 09 13:31:56 crc kubenswrapper[4703]: I1209 13:31:56.070622 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:31:56 crc kubenswrapper[4703]: E1209 13:31:56.071272 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:31:56 crc kubenswrapper[4703]: E1209 13:31:56.073272 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:32:02 crc kubenswrapper[4703]: E1209 13:32:02.072558 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:32:08 crc kubenswrapper[4703]: E1209 13:32:08.073327 4703 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:32:11 crc kubenswrapper[4703]: I1209 13:32:11.079790 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:32:11 crc kubenswrapper[4703]: E1209 13:32:11.080740 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:32:17 crc kubenswrapper[4703]: E1209 13:32:17.073802 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:32:19 crc kubenswrapper[4703]: E1209 13:32:19.072711 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:32:26 crc kubenswrapper[4703]: I1209 13:32:26.070975 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:32:26 crc kubenswrapper[4703]: E1209 13:32:26.072084 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:32:29 crc kubenswrapper[4703]: E1209 13:32:29.116744 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:32:33 crc kubenswrapper[4703]: E1209 13:32:33.074898 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:32:39 crc kubenswrapper[4703]: I1209 13:32:39.070381 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:32:39 crc kubenswrapper[4703]: E1209 13:32:39.071645 4703 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:32:40 crc kubenswrapper[4703]: E1209 13:32:40.074061 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:32:47 crc kubenswrapper[4703]: E1209 13:32:47.073672 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:32:50 crc kubenswrapper[4703]: I1209 13:32:50.070301 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:32:50 crc kubenswrapper[4703]: E1209 13:32:50.071206 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:32:55 crc kubenswrapper[4703]: E1209 13:32:55.074535 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:00 crc kubenswrapper[4703]: E1209 13:33:00.073567 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:33:01 crc kubenswrapper[4703]: I1209 13:33:01.083303 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:33:02 crc kubenswrapper[4703]: I1209 13:33:02.191172 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9"} Dec 09 13:33:08 crc kubenswrapper[4703]: E1209 13:33:08.074538 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:09 crc kubenswrapper[4703]: I1209 13:33:09.288924 4703 generic.go:334] "Generic (PLEG): container finished" podID="b3b60855-8704-45ce-9688-2364ca1978f0" containerID="ceef06294f3c1386364c167c693c3cb50b20df40ac8ca30062f2aa62cf0b83eb" exitCode=2 Dec 09 13:33:09 crc kubenswrapper[4703]: I1209 13:33:09.289305 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" event={"ID":"b3b60855-8704-45ce-9688-2364ca1978f0","Type":"ContainerDied","Data":"ceef06294f3c1386364c167c693c3cb50b20df40ac8ca30062f2aa62cf0b83eb"} Dec 09 13:33:10 crc kubenswrapper[4703]: I1209 13:33:10.889410 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.069382 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory\") pod \"b3b60855-8704-45ce-9688-2364ca1978f0\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.069692 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key\") pod \"b3b60855-8704-45ce-9688-2364ca1978f0\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.069749 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nfcv\" (UniqueName: \"kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv\") pod \"b3b60855-8704-45ce-9688-2364ca1978f0\" (UID: \"b3b60855-8704-45ce-9688-2364ca1978f0\") " Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.077787 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv" (OuterVolumeSpecName: "kube-api-access-7nfcv") pod "b3b60855-8704-45ce-9688-2364ca1978f0" (UID: "b3b60855-8704-45ce-9688-2364ca1978f0"). InnerVolumeSpecName "kube-api-access-7nfcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.108528 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory" (OuterVolumeSpecName: "inventory") pod "b3b60855-8704-45ce-9688-2364ca1978f0" (UID: "b3b60855-8704-45ce-9688-2364ca1978f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.121574 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3b60855-8704-45ce-9688-2364ca1978f0" (UID: "b3b60855-8704-45ce-9688-2364ca1978f0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.183682 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nfcv\" (UniqueName: \"kubernetes.io/projected/b3b60855-8704-45ce-9688-2364ca1978f0-kube-api-access-7nfcv\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.184013 4703 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.184286 4703 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3b60855-8704-45ce-9688-2364ca1978f0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.312041 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" event={"ID":"b3b60855-8704-45ce-9688-2364ca1978f0","Type":"ContainerDied","Data":"2aa8421421105df18df225a7ae2a288fb966f0149ac5a923d256898638e2ddbe"} Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.312096 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-txcq5" Dec 09 13:33:11 crc kubenswrapper[4703]: I1209 13:33:11.312103 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa8421421105df18df225a7ae2a288fb966f0149ac5a923d256898638e2ddbe" Dec 09 13:33:12 crc kubenswrapper[4703]: E1209 13:33:12.075755 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:33:21 crc kubenswrapper[4703]: E1209 13:33:21.085736 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:24 crc kubenswrapper[4703]: E1209 13:33:24.074181 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:33:33 crc kubenswrapper[4703]: E1209 13:33:33.074250 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:35 crc kubenswrapper[4703]: E1209 13:33:35.073175 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.165704 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jl546/must-gather-kbc2l"] Dec 09 13:33:42 crc kubenswrapper[4703]: E1209 13:33:42.166838 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="registry-server" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.166852 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="registry-server" Dec 09 13:33:42 crc kubenswrapper[4703]: E1209 13:33:42.166869 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="extract-content" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.166875 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="extract-content" Dec 09 13:33:42 crc kubenswrapper[4703]: E1209 13:33:42.166895 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b60855-8704-45ce-9688-2364ca1978f0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.166907 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b60855-8704-45ce-9688-2364ca1978f0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:33:42 crc kubenswrapper[4703]: E1209 13:33:42.166924 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="extract-utilities" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.166930 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="extract-utilities" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.167170 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a41fd7-a4e8-406a-9a10-2171a304ec62" containerName="registry-server" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.167202 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b60855-8704-45ce-9688-2364ca1978f0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.168532 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.171241 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jl546"/"default-dockercfg-82mbz" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.171377 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jl546"/"kube-root-ca.crt" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.171707 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jl546"/"openshift-service-ca.crt" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.196643 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jl546/must-gather-kbc2l"] Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.283326 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.284153 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sflrs\" (UniqueName: \"kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.387116 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sflrs\" (UniqueName: \"kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.387258 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.387820 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.424691 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sflrs\" (UniqueName: \"kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs\") pod \"must-gather-kbc2l\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.489958 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:33:42 crc kubenswrapper[4703]: I1209 13:33:42.941021 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jl546/must-gather-kbc2l"] Dec 09 13:33:43 crc kubenswrapper[4703]: I1209 13:33:43.690992 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/must-gather-kbc2l" event={"ID":"abea605e-e3a0-478a-a68c-4e5c49ca4524","Type":"ContainerStarted","Data":"c528d5a7093db79aee3f7754e49be5254605489fa36662231f8ae804aee40e29"} Dec 09 13:33:45 crc kubenswrapper[4703]: E1209 13:33:45.073893 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:50 crc kubenswrapper[4703]: E1209 13:33:50.084118 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:33:52 crc kubenswrapper[4703]: I1209 13:33:52.792136 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/must-gather-kbc2l" event={"ID":"abea605e-e3a0-478a-a68c-4e5c49ca4524","Type":"ContainerStarted","Data":"78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747"} Dec 09 13:33:52 crc kubenswrapper[4703]: I1209 13:33:52.793010 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/must-gather-kbc2l" event={"ID":"abea605e-e3a0-478a-a68c-4e5c49ca4524","Type":"ContainerStarted","Data":"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47"} Dec 09 13:33:52 crc kubenswrapper[4703]: I1209 13:33:52.815112 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jl546/must-gather-kbc2l" podStartSLOduration=2.023019044 podStartE2EDuration="10.815091288s" podCreationTimestamp="2025-12-09 13:33:42 +0000 UTC" firstStartedPulling="2025-12-09 13:33:42.972468213 +0000 UTC m=+5322.221231732" lastFinishedPulling="2025-12-09 13:33:51.764540447 +0000 UTC m=+5331.013303976" observedRunningTime="2025-12-09 13:33:52.8125244 +0000 UTC m=+5332.061287929" watchObservedRunningTime="2025-12-09 13:33:52.815091288 +0000 UTC m=+5332.063854807" Dec 09 13:33:55 crc kubenswrapper[4703]: E1209 13:33:55.431461 4703 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.201:51672->38.102.83.201:35489: write tcp 38.102.83.201:51672->38.102.83.201:35489: write: broken pipe Dec 09 13:33:56 crc kubenswrapper[4703]: E1209 13:33:56.071435 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.292203 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jl546/crc-debug-c9xpg"] Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 
13:33:56.294182 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.367647 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxdgm\" (UniqueName: \"kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.368231 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.470889 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxdgm\" (UniqueName: \"kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.471237 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.471386 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.496139 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxdgm\" (UniqueName: \"kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm\") pod \"crc-debug-c9xpg\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.621333 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:33:56 crc kubenswrapper[4703]: W1209 13:33:56.806033 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd3198d1_ba2f_4cd5_8eb4_574c36ab90eb.slice/crio-21d5aa6ee80d7a1b3c15f3eb45ea0f6d9660f8d5e311c0e7a0ebc6a6296b83f2 WatchSource:0}: Error finding container 21d5aa6ee80d7a1b3c15f3eb45ea0f6d9660f8d5e311c0e7a0ebc6a6296b83f2: Status 404 returned error can't find the container with id 21d5aa6ee80d7a1b3c15f3eb45ea0f6d9660f8d5e311c0e7a0ebc6a6296b83f2 Dec 09 13:33:56 crc kubenswrapper[4703]: I1209 13:33:56.838462 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/crc-debug-c9xpg" event={"ID":"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb","Type":"ContainerStarted","Data":"21d5aa6ee80d7a1b3c15f3eb45ea0f6d9660f8d5e311c0e7a0ebc6a6296b83f2"} Dec 09 13:34:02 crc kubenswrapper[4703]: E1209 13:34:02.072068 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:34:09 crc kubenswrapper[4703]: E1209 13:34:09.075734 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:34:12 crc kubenswrapper[4703]: I1209 13:34:12.050874 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/crc-debug-c9xpg" event={"ID":"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb","Type":"ContainerStarted","Data":"4c4d661c161c7162407e5541c5331536ebadf32e850585dc5cc80ac2b17de941"} Dec 09 13:34:12 crc kubenswrapper[4703]: I1209 13:34:12.074528 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jl546/crc-debug-c9xpg" podStartSLOduration=2.732334563 podStartE2EDuration="16.07450672s" podCreationTimestamp="2025-12-09 13:33:56 +0000 UTC" firstStartedPulling="2025-12-09 13:33:56.809292529 +0000 UTC m=+5336.058056048" lastFinishedPulling="2025-12-09 13:34:10.151464686 +0000 UTC m=+5349.400228205" observedRunningTime="2025-12-09 13:34:12.070545125 +0000 UTC m=+5351.319308644" watchObservedRunningTime="2025-12-09 13:34:12.07450672 +0000 UTC m=+5351.323270239" Dec 09 13:34:14 crc kubenswrapper[4703]: E1209 13:34:14.076107 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:34:21 crc kubenswrapper[4703]: E1209 13:34:21.093076 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:34:26 crc 
kubenswrapper[4703]: E1209 13:34:26.077277 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:34:33 crc kubenswrapper[4703]: I1209 13:34:33.307526 4703 generic.go:334] "Generic (PLEG): container finished" podID="bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" containerID="4c4d661c161c7162407e5541c5331536ebadf32e850585dc5cc80ac2b17de941" exitCode=0 Dec 09 13:34:33 crc kubenswrapper[4703]: I1209 13:34:33.307632 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/crc-debug-c9xpg" event={"ID":"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb","Type":"ContainerDied","Data":"4c4d661c161c7162407e5541c5331536ebadf32e850585dc5cc80ac2b17de941"} Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.462372 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.511399 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jl546/crc-debug-c9xpg"] Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.522958 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jl546/crc-debug-c9xpg"] Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.556809 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host\") pod \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.557422 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxdgm\" (UniqueName: \"kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm\") pod \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\" (UID: \"bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb\") " Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.559088 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host" (OuterVolumeSpecName: "host") pod "bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" (UID: "bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.569514 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm" (OuterVolumeSpecName: "kube-api-access-xxdgm") pod "bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" (UID: "bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb"). InnerVolumeSpecName "kube-api-access-xxdgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.662350 4703 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-host\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:34 crc kubenswrapper[4703]: I1209 13:34:34.662389 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxdgm\" (UniqueName: \"kubernetes.io/projected/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb-kube-api-access-xxdgm\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.083514 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" path="/var/lib/kubelet/pods/bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb/volumes" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.332452 4703 scope.go:117] "RemoveContainer" containerID="4c4d661c161c7162407e5541c5331536ebadf32e850585dc5cc80ac2b17de941" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.332643 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-c9xpg" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.788392 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jl546/crc-debug-8dqcb"] Dec 09 13:34:35 crc kubenswrapper[4703]: E1209 13:34:35.789291 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" containerName="container-00" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.789309 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" containerName="container-00" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.789877 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3198d1-ba2f-4cd5-8eb4-574c36ab90eb" containerName="container-00" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.791016 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.816686 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rfh\" (UniqueName: \"kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.816757 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.919336 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rfh\" (UniqueName: \"kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.919408 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.919647 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:35 crc kubenswrapper[4703]: I1209 13:34:35.948044 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rfh\" (UniqueName: \"kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh\") pod \"crc-debug-8dqcb\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:36 crc kubenswrapper[4703]: E1209 13:34:36.071703 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:34:36 crc kubenswrapper[4703]: I1209 13:34:36.110545 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:36 crc kubenswrapper[4703]: I1209 13:34:36.351565 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/crc-debug-8dqcb" event={"ID":"59f9a797-774d-4d75-b1b3-5a0624ca4480","Type":"ContainerStarted","Data":"2f3cf7cac6bd8c5a3df61b2baf012ac4203a23651b307b4e4d407c9d18e5cef7"} Dec 09 13:34:37 crc kubenswrapper[4703]: I1209 13:34:37.378447 4703 generic.go:334] "Generic (PLEG): container finished" podID="59f9a797-774d-4d75-b1b3-5a0624ca4480" containerID="cd7be395b23e584c3ffd91a4c38d3a410281a0b1636a129bb3ed95feac3b259a" exitCode=1 Dec 09 13:34:37 crc kubenswrapper[4703]: I1209 13:34:37.378498 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/crc-debug-8dqcb" event={"ID":"59f9a797-774d-4d75-b1b3-5a0624ca4480","Type":"ContainerDied","Data":"cd7be395b23e584c3ffd91a4c38d3a410281a0b1636a129bb3ed95feac3b259a"} Dec 09 13:34:37 crc kubenswrapper[4703]: I1209 13:34:37.427480 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jl546/crc-debug-8dqcb"] Dec 09 13:34:37 crc kubenswrapper[4703]: I1209 13:34:37.439074 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jl546/crc-debug-8dqcb"] Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.538386 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.582457 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57rfh\" (UniqueName: \"kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh\") pod \"59f9a797-774d-4d75-b1b3-5a0624ca4480\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.582818 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host\") pod \"59f9a797-774d-4d75-b1b3-5a0624ca4480\" (UID: \"59f9a797-774d-4d75-b1b3-5a0624ca4480\") " Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.583459 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host" (OuterVolumeSpecName: "host") pod "59f9a797-774d-4d75-b1b3-5a0624ca4480" (UID: "59f9a797-774d-4d75-b1b3-5a0624ca4480"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.584169 4703 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f9a797-774d-4d75-b1b3-5a0624ca4480-host\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.595713 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh" (OuterVolumeSpecName: "kube-api-access-57rfh") pod "59f9a797-774d-4d75-b1b3-5a0624ca4480" (UID: "59f9a797-774d-4d75-b1b3-5a0624ca4480"). InnerVolumeSpecName "kube-api-access-57rfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:34:38 crc kubenswrapper[4703]: I1209 13:34:38.686966 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57rfh\" (UniqueName: \"kubernetes.io/projected/59f9a797-774d-4d75-b1b3-5a0624ca4480-kube-api-access-57rfh\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:39 crc kubenswrapper[4703]: I1209 13:34:39.082915 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f9a797-774d-4d75-b1b3-5a0624ca4480" path="/var/lib/kubelet/pods/59f9a797-774d-4d75-b1b3-5a0624ca4480/volumes" Dec 09 13:34:39 crc kubenswrapper[4703]: I1209 13:34:39.429821 4703 scope.go:117] "RemoveContainer" containerID="cd7be395b23e584c3ffd91a4c38d3a410281a0b1636a129bb3ed95feac3b259a" Dec 09 13:34:39 crc kubenswrapper[4703]: I1209 13:34:39.430248 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/crc-debug-8dqcb" Dec 09 13:34:41 crc kubenswrapper[4703]: I1209 13:34:41.079784 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:34:41 crc kubenswrapper[4703]: E1209 13:34:41.210803 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:34:41 crc kubenswrapper[4703]: E1209 13:34:41.211418 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:34:41 crc kubenswrapper[4703]: E1209 13:34:41.211668 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:34:41 crc kubenswrapper[4703]: E1209 13:34:41.212862 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:34:50 crc kubenswrapper[4703]: E1209 13:34:50.073609 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:34:52 crc kubenswrapper[4703]: E1209 13:34:52.073799 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:35:02 crc kubenswrapper[4703]: E1209 13:35:02.071991 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:35:03 crc kubenswrapper[4703]: E1209 13:35:03.072287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:35:13 crc kubenswrapper[4703]: E1209 13:35:13.203941 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:35:13 crc kubenswrapper[4703]: E1209 13:35:13.204735 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:35:13 crc kubenswrapper[4703]: E1209 13:35:13.205008 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:35:13 crc kubenswrapper[4703]: E1209 13:35:13.206323 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:35:16 crc kubenswrapper[4703]: E1209 13:35:16.074207 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:35:25 crc kubenswrapper[4703]: E1209 13:35:25.071981 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:35:27 crc kubenswrapper[4703]: I1209 13:35:27.831353 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b/init-config-reloader/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.039058 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b/init-config-reloader/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.082923 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b/config-reloader/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.096381 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_fe0eb656-a6e6-4b2e-9caf-53ef2d0a3f2b/alertmanager/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.258972 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fc6c9d8f6-snwb8_bd9698c2-9ecf-4c99-9060-98201396be37/barbican-api/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.635355 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6fc6c9d8f6-snwb8_bd9698c2-9ecf-4c99-9060-98201396be37/barbican-api-log/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.800624 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-569b9f958-2lvsd_8a74b2dd-de01-4b4f-ae23-00856d55f81a/barbican-keystone-listener-log/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.802903 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-569b9f958-2lvsd_8a74b2dd-de01-4b4f-ae23-00856d55f81a/barbican-keystone-listener/0.log" Dec 09 13:35:28 crc kubenswrapper[4703]: I1209 13:35:28.915612 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f4f589d9f-q5cvn_dadce417-24c9-4497-91bc-4bbb67fae899/barbican-worker/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.071225 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f4f589d9f-q5cvn_dadce417-24c9-4497-91bc-4bbb67fae899/barbican-worker-log/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.197725 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kdscp_c01bdc0a-4376-4b9d-8418-40e064327bfe/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.414637 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce42c586-f397-4f98-be45-f56d36115d7a/ceilometer-notification-agent/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.482563 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce42c586-f397-4f98-be45-f56d36115d7a/proxy-httpd/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.499710 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ce42c586-f397-4f98-be45-f56d36115d7a/sg-core/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.704630 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e/cinder-api-log/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.723961 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b6029ff4-8d8a-442e-95d0-b7d3c0cbdc0e/cinder-api/0.log" Dec 09 13:35:29 crc kubenswrapper[4703]: I1209 13:35:29.886053 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0caaf13-e6d6-4666-a620-b09e9988bb1c/cinder-scheduler/0.log" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.009498 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_b0caaf13-e6d6-4666-a620-b09e9988bb1c/probe/0.log" Dec 09 13:35:30 crc kubenswrapper[4703]: E1209 13:35:30.073140 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.084064 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.084123 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.103875 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_20fe1291-16c1-4602-b70d-fad8bda0f61b/cloudkitty-api-log/0.log" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.149149 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_20fe1291-16c1-4602-b70d-fad8bda0f61b/cloudkitty-api/0.log" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.441061 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_92ca14de-b3f5-4f21-96d6-a71281b49c5c/loki-compactor/0.log" Dec 09 13:35:30 crc kubenswrapper[4703]: I1209 13:35:30.556835 4703 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-664b687b54-cjj5n_36c4cea6-8d9a-4979-83c0-28ba95bd7c7e/loki-distributor/0.log" Dec 09 13:35:31 crc kubenswrapper[4703]: I1209 13:35:31.960459 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-bc75944f-sdssq_378ce3ef-fa33-4466-afa9-cc57b84fed76/gateway/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.065484 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-bc75944f-5rr9c_b56f6841-bd74-4321-bd6d-a2478a62a8de/gateway/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.079079 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_ffe1d3a3-3faf-4228-b28b-fcfb12cba786/loki-index-gateway/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.353066 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_be3c1046-2c78-46ab-a62f-f4270561ca1c/loki-ingester/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.423431 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-5467947bf7-tfqqj_c05374ec-bb40-45f2-bc03-f84a6eb40f42/loki-querier/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.585638 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-7c8cd744d9-9rn7g_810fed49-b38e-4404-a03c-05dc5aa59ccb/loki-query-frontend/0.log" Dec 09 13:35:32 crc kubenswrapper[4703]: I1209 13:35:32.810032 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-xznm5_1769eb76-1a06-4ce4-accf-7a0b91a759c8/init/0.log" Dec 09 13:35:33 crc kubenswrapper[4703]: I1209 13:35:33.103667 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-xznm5_1769eb76-1a06-4ce4-accf-7a0b91a759c8/init/0.log" Dec 09 13:35:33 crc kubenswrapper[4703]: I1209 13:35:33.109363 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-xznm5_1769eb76-1a06-4ce4-accf-7a0b91a759c8/dnsmasq-dns/0.log" Dec 09 13:35:33 crc kubenswrapper[4703]: I1209 13:35:33.154004 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7ghts_4157464d-e43c-4d62-89b5-ececeb2ff437/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:33 crc kubenswrapper[4703]: I1209 13:35:33.402393 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-cf88n_cbef74eb-61bc-4efa-8621-6e089311a571/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:34 crc kubenswrapper[4703]: I1209 13:35:34.177445 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jjxfg_ff13d4cd-5b50-4df6-9c21-fe4eed8fa7bc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:34 crc kubenswrapper[4703]: I1209 13:35:34.197948 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ftl6m_43ca3e6a-7682-46cb-97e4-51c7a8010e98/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:34 crc kubenswrapper[4703]: I1209 13:35:34.894265 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tq6f4_1e42421c-88e1-4119-a154-33226550fe4d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:34 crc kubenswrapper[4703]: I1209 13:35:34.915311 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-txcq5_b3b60855-8704-45ce-9688-2364ca1978f0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.125820 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vnj2z_7340d18a-8eee-4c8f-88d0-13d7bb17a825/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.173950 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_38e472f5-3c76-4959-b813-119ab542819e/glance-httpd/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.371864 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_38e472f5-3c76-4959-b813-119ab542819e/glance-log/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.501586 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d5aa154d-168f-4bf7-a86f-85f3f8989c41/glance-log/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.543496 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d5aa154d-168f-4bf7-a86f-85f3f8989c41/glance-httpd/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.789937 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421421-mc2kd_e081ba82-5ecf-49af-9c37-aa05d051ee71/keystone-cron/0.log" Dec 09 13:35:35 crc kubenswrapper[4703]: I1209 13:35:35.834087 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5f58796dd5-gpp8b_ce0e8ad7-3e3e-4b1a-a892-0824acf34a1b/keystone-api/0.log" Dec 09 13:35:36 crc kubenswrapper[4703]: I1209 13:35:36.049023 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3341d0c1-348a-4664-90a6-94ee7865fd94/kube-state-metrics/0.log" Dec 09 13:35:36 crc kubenswrapper[4703]: E1209 13:35:36.073223 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:35:36 crc kubenswrapper[4703]: I1209 13:35:36.419788 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79c5b588df-ksgrp_b68ace68-caf9-489c-88d9-0daf26bfdeb7/neutron-api/0.log" Dec 09 13:35:36 crc kubenswrapper[4703]: I1209 13:35:36.445028 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_39bfa711-6e54-46c2-a7d4-e14927ffbc09/cloudkitty-proc/0.log" Dec 09 13:35:36 crc kubenswrapper[4703]: I1209 13:35:36.548845 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-79c5b588df-ksgrp_b68ace68-caf9-489c-88d9-0daf26bfdeb7/neutron-httpd/0.log" Dec 09 13:35:37 crc kubenswrapper[4703]: I1209 13:35:37.169498 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_d87ed196-d02a-4a6b-95b7-0835983307f4/nova-api-log/0.log" Dec 09 13:35:37 crc kubenswrapper[4703]: I1209 13:35:37.312762 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_16228bdd-1db6-4397-b09a-b372d7957ad8/nova-cell0-conductor-conductor/0.log" Dec 09 13:35:37 crc kubenswrapper[4703]: I1209 13:35:37.519336 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8eceabb3-1419-4d9d-a3d7-fad5725b8ae9/nova-cell1-conductor-conductor/0.log" Dec 09 13:35:37 crc kubenswrapper[4703]: I1209 13:35:37.632809 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d87ed196-d02a-4a6b-95b7-0835983307f4/nova-api-api/0.log" Dec 09 13:35:37 crc kubenswrapper[4703]: I1209 13:35:37.677960 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_24eb5ce0-a821-4df9-b399-0a05417ef984/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 13:35:38 crc kubenswrapper[4703]: I1209 13:35:38.396062 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa203c51-b3d0-4fe3-9471-9e129b93794e/nova-metadata-log/0.log" Dec 09 13:35:38 crc kubenswrapper[4703]: I1209 13:35:38.602063 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7e4de361-7a9f-42a1-9c5d-35e890f37e8b/nova-scheduler-scheduler/0.log" Dec 09 13:35:38 crc kubenswrapper[4703]: I1209 13:35:38.726166 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9bdf302a-4c2d-41c3-b1be-c08e52c5244c/mysql-bootstrap/0.log" Dec 09 13:35:38 crc kubenswrapper[4703]: I1209 13:35:38.916904 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9bdf302a-4c2d-41c3-b1be-c08e52c5244c/mysql-bootstrap/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.072035 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9bdf302a-4c2d-41c3-b1be-c08e52c5244c/galera/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.165764 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00872775-af9b-49e8-9a6e-08baa2171c88/mysql-bootstrap/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.491961 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00872775-af9b-49e8-9a6e-08baa2171c88/mysql-bootstrap/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.509090 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_00872775-af9b-49e8-9a6e-08baa2171c88/galera/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.802554 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_82a9b4ac-cb47-454e-802d-0f24b798103b/openstackclient/0.log" Dec 09 13:35:39 crc kubenswrapper[4703]: I1209 13:35:39.882094 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-lsj78_4222b5eb-89d5-41be-ab08-6f3f3f4dab42/ovn-controller/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.105107 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-x2pt9_da4d90d1-a4e1-44cf-9fd7-b6fd623ebe70/openstack-network-exporter/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.422997 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-b4tvr_8444467c-d711-4618-8518-1c45921e6493/ovsdb-server-init/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.530870 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_aa203c51-b3d0-4fe3-9471-9e129b93794e/nova-metadata-metadata/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.591176 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4tvr_8444467c-d711-4618-8518-1c45921e6493/ovsdb-server-init/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.597567 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4tvr_8444467c-d711-4618-8518-1c45921e6493/ovs-vswitchd/0.log" Dec 09 13:35:40 crc kubenswrapper[4703]: I1209 13:35:40.757630 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-b4tvr_8444467c-d711-4618-8518-1c45921e6493/ovsdb-server/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.179796 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fa6ba9a6-c0d8-4260-867d-ad91464cc39b/ovn-northd/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.224020 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fa6ba9a6-c0d8-4260-867d-ad91464cc39b/openstack-network-exporter/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.469391 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d93546da-e35c-418f-a1e1-9d7b65c42829/openstack-network-exporter/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.496090 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d93546da-e35c-418f-a1e1-9d7b65c42829/ovsdbserver-nb/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.659519 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d85de252-1b8c-45f0-a143-eaa5f2d52fcb/openstack-network-exporter/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.864556 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-865d57f67b-x7hvw_2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3/placement-api/0.log" Dec 09 13:35:41 crc kubenswrapper[4703]: I1209 13:35:41.867443 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d85de252-1b8c-45f0-a143-eaa5f2d52fcb/ovsdbserver-sb/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.111273 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-865d57f67b-x7hvw_2ddebb8b-fcc7-4ee0-b7d4-af4b2969f8f3/placement-log/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.175798 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_30ba4b6c-b025-4ab0-b589-a2c72caf1997/init-config-reloader/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.441608 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_30ba4b6c-b025-4ab0-b589-a2c72caf1997/config-reloader/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.484639 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_30ba4b6c-b025-4ab0-b589-a2c72caf1997/prometheus/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.523780 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_30ba4b6c-b025-4ab0-b589-a2c72caf1997/init-config-reloader/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.560238 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_30ba4b6c-b025-4ab0-b589-a2c72caf1997/thanos-sidecar/0.log" Dec 09 13:35:42 crc kubenswrapper[4703]: I1209 13:35:42.768966 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d195f6e9-05a6-430c-b28f-847f7635f1ee/setup-container/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.044274 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d195f6e9-05a6-430c-b28f-847f7635f1ee/setup-container/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: E1209 13:35:43.073546 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.125521 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_992a545d-2e79-43b3-819b-bd337432ba58/setup-container/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.148457 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d195f6e9-05a6-430c-b28f-847f7635f1ee/rabbitmq/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.441351 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_992a545d-2e79-43b3-819b-bd337432ba58/rabbitmq/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.463991 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_992a545d-2e79-43b3-819b-bd337432ba58/setup-container/0.log" Dec 09 13:35:43 crc kubenswrapper[4703]: I1209 13:35:43.520867 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fr24m_ea7666b8-7519-49e0-b20d-5aa60df946a4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:44 crc kubenswrapper[4703]: I1209 13:35:44.415339 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9nm9n_b62f9f99-a686-48b2-90b3-5ccdcf42a687/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:44 crc kubenswrapper[4703]: I1209 13:35:44.762603 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85bddbcbcc-v2sfc_366b66fd-ba0b-44c9-b9c4-ad2038f94d86/proxy-server/0.log" Dec 09 13:35:44 crc kubenswrapper[4703]: I1209 13:35:44.768499 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85bddbcbcc-v2sfc_366b66fd-ba0b-44c9-b9c4-ad2038f94d86/proxy-httpd/0.log" Dec 09 13:35:44 crc kubenswrapper[4703]: I1209 13:35:44.806971 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cscwb_2adbe90b-b3dc-480b-9ab1-f6084b5dee94/swift-ring-rebalance/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.327729 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/account-auditor/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.385387 
4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/account-reaper/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.435273 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/account-replicator/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.552857 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/account-server/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.613919 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/container-auditor/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.688612 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/container-server/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.694000 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/container-replicator/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.904058 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/container-updater/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.905156 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/object-auditor/0.log" Dec 09 13:35:45 crc kubenswrapper[4703]: I1209 13:35:45.997677 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/object-replicator/0.log" Dec 09 13:35:46 crc kubenswrapper[4703]: I1209 13:35:46.030595 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/object-expirer/0.log" Dec 09 13:35:46 crc kubenswrapper[4703]: I1209 13:35:46.757944 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/object-updater/0.log" Dec 09 13:35:46 crc kubenswrapper[4703]: I1209 13:35:46.830612 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/object-server/0.log" Dec 09 13:35:46 crc kubenswrapper[4703]: I1209 13:35:46.837081 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/rsync/0.log" Dec 09 13:35:46 crc kubenswrapper[4703]: I1209 13:35:46.870733 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7ad4bf88-77c9-4a0a-8fad-75f44d1b4f44/swift-recon-cron/0.log" Dec 09 13:35:48 crc kubenswrapper[4703]: E1209 13:35:48.072162 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:35:50 crc kubenswrapper[4703]: I1209 13:35:50.469282 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dad57d2e-6021-4515-9075-243ab3ce4aec/memcached/0.log" Dec 09 13:35:54 crc 
kubenswrapper[4703]: E1209 13:35:54.073839 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:36:00 crc kubenswrapper[4703]: E1209 13:36:00.071563 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:36:00 crc kubenswrapper[4703]: I1209 13:36:00.083615 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:36:00 crc kubenswrapper[4703]: I1209 13:36:00.083700 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.789658 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7dh4f"] Dec 09 13:36:07 crc kubenswrapper[4703]: E1209 13:36:07.790727 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f9a797-774d-4d75-b1b3-5a0624ca4480" containerName="container-00" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.790742 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f9a797-774d-4d75-b1b3-5a0624ca4480" containerName="container-00" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.790990 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f9a797-774d-4d75-b1b3-5a0624ca4480" containerName="container-00" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.792792 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.805930 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dh4f"] Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.875354 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t4f8\" (UniqueName: \"kubernetes.io/projected/7c34f924-873b-4f3b-871c-696caae7d620-kube-api-access-5t4f8\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.875534 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-catalog-content\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.875673 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-utilities\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.978818 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t4f8\" (UniqueName: \"kubernetes.io/projected/7c34f924-873b-4f3b-871c-696caae7d620-kube-api-access-5t4f8\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.978891 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-catalog-content\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.978928 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-utilities\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.979564 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-utilities\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:07 crc kubenswrapper[4703]: I1209 13:36:07.979674 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c34f924-873b-4f3b-871c-696caae7d620-catalog-content\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:08 crc kubenswrapper[4703]: I1209 13:36:08.017872 4703 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5t4f8\" (UniqueName: \"kubernetes.io/projected/7c34f924-873b-4f3b-871c-696caae7d620-kube-api-access-5t4f8\") pod \"certified-operators-7dh4f\" (UID: \"7c34f924-873b-4f3b-871c-696caae7d620\") " pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:08 crc kubenswrapper[4703]: I1209 13:36:08.153640 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:08 crc kubenswrapper[4703]: I1209 13:36:08.823828 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dh4f"] Dec 09 13:36:09 crc kubenswrapper[4703]: E1209 13:36:09.078310 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:36:09 crc kubenswrapper[4703]: I1209 13:36:09.595094 4703 generic.go:334] "Generic (PLEG): container finished" podID="7c34f924-873b-4f3b-871c-696caae7d620" containerID="0e1d3fc2c418c0ad08cb70a89da4594e5fabbc1f51937a53298d672921672c64" exitCode=0 Dec 09 13:36:09 crc kubenswrapper[4703]: I1209 13:36:09.595197 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dh4f" event={"ID":"7c34f924-873b-4f3b-871c-696caae7d620","Type":"ContainerDied","Data":"0e1d3fc2c418c0ad08cb70a89da4594e5fabbc1f51937a53298d672921672c64"} Dec 09 13:36:09 crc kubenswrapper[4703]: I1209 13:36:09.595236 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dh4f" event={"ID":"7c34f924-873b-4f3b-871c-696caae7d620","Type":"ContainerStarted","Data":"5c888022060330e4bdfbb6f4530bb6a0a3582a21f654f2e88f539df079fbd998"} Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.401213 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.405616 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.419028 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.454786 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4mf\" (UniqueName: \"kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.455030 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.455105 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.557484 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4mf\" (UniqueName: \"kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.557610 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.557649 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.560203 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.560694 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.581098 4703 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kt4mf\" (UniqueName: \"kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf\") pod \"redhat-marketplace-75k5m\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:10 crc kubenswrapper[4703]: I1209 13:36:10.747184 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:11 crc kubenswrapper[4703]: I1209 13:36:11.354991 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:11 crc kubenswrapper[4703]: W1209 13:36:11.372631 4703 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda61257f4_093f_4d25_ad4e_d33ee8ab012b.slice/crio-fed9ef35bd55f65c90cbdbafdeb7bfa3e1c28d3f6caf7d41a22c4011be2fc200 WatchSource:0}: Error finding container fed9ef35bd55f65c90cbdbafdeb7bfa3e1c28d3f6caf7d41a22c4011be2fc200: Status 404 returned error can't find the container with id fed9ef35bd55f65c90cbdbafdeb7bfa3e1c28d3f6caf7d41a22c4011be2fc200 Dec 09 13:36:11 crc kubenswrapper[4703]: I1209 13:36:11.636175 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerStarted","Data":"fed9ef35bd55f65c90cbdbafdeb7bfa3e1c28d3f6caf7d41a22c4011be2fc200"} Dec 09 13:36:12 crc kubenswrapper[4703]: I1209 13:36:12.651615 4703 generic.go:334] "Generic (PLEG): container finished" podID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerID="3e3a78c22c401f38288ff0da64be8126e3a1ac4b693d1f08563968b278f9ab54" exitCode=0 Dec 09 13:36:12 crc kubenswrapper[4703]: I1209 13:36:12.651968 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerDied","Data":"3e3a78c22c401f38288ff0da64be8126e3a1ac4b693d1f08563968b278f9ab54"} Dec 09 13:36:15 crc kubenswrapper[4703]: E1209 13:36:15.079561 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:36:17 crc kubenswrapper[4703]: I1209 13:36:17.713977 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerStarted","Data":"45d90ccb5ce3d55f5c6a3376bf72520e41086158df72befbc6d2a6678cc591fb"} Dec 09 13:36:17 crc kubenswrapper[4703]: I1209 13:36:17.716252 4703 generic.go:334] "Generic (PLEG): container finished" podID="7c34f924-873b-4f3b-871c-696caae7d620" containerID="cd40ac94ca1878c05f3a6804df4f4a4bb393e9e21ee8e1f96a32926c674ca2ca" exitCode=0 Dec 09 13:36:17 crc kubenswrapper[4703]: I1209 13:36:17.716320 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dh4f" event={"ID":"7c34f924-873b-4f3b-871c-696caae7d620","Type":"ContainerDied","Data":"cd40ac94ca1878c05f3a6804df4f4a4bb393e9e21ee8e1f96a32926c674ca2ca"} Dec 09 13:36:18 crc kubenswrapper[4703]: I1209 13:36:18.730905 4703 generic.go:334] "Generic (PLEG): container finished" 
podID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerID="45d90ccb5ce3d55f5c6a3376bf72520e41086158df72befbc6d2a6678cc591fb" exitCode=0 Dec 09 13:36:18 crc kubenswrapper[4703]: I1209 13:36:18.731351 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerDied","Data":"45d90ccb5ce3d55f5c6a3376bf72520e41086158df72befbc6d2a6678cc591fb"} Dec 09 13:36:19 crc kubenswrapper[4703]: I1209 13:36:19.744974 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerStarted","Data":"7568ba7a460b6bebbce44379c761a33d03dd97998a09bd867f84c42a78c072fc"} Dec 09 13:36:19 crc kubenswrapper[4703]: I1209 13:36:19.747215 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7dh4f" event={"ID":"7c34f924-873b-4f3b-871c-696caae7d620","Type":"ContainerStarted","Data":"b806ebe8b8a3d9cee0043f018066ba893ac5f530fba3a621a962d351c13cc5df"} Dec 09 13:36:19 crc kubenswrapper[4703]: I1209 13:36:19.776658 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-75k5m" podStartSLOduration=3.07484579 podStartE2EDuration="9.77663097s" podCreationTimestamp="2025-12-09 13:36:10 +0000 UTC" firstStartedPulling="2025-12-09 13:36:12.656030147 +0000 UTC m=+5471.904793656" lastFinishedPulling="2025-12-09 13:36:19.357815317 +0000 UTC m=+5478.606578836" observedRunningTime="2025-12-09 13:36:19.76950998 +0000 UTC m=+5479.018273499" watchObservedRunningTime="2025-12-09 13:36:19.77663097 +0000 UTC m=+5479.025394489" Dec 09 13:36:19 crc kubenswrapper[4703]: I1209 13:36:19.792157 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7dh4f" podStartSLOduration=3.173088646 podStartE2EDuration="12.792137522s" podCreationTimestamp="2025-12-09 13:36:07 +0000 UTC" firstStartedPulling="2025-12-09 13:36:09.597314347 +0000 UTC m=+5468.846077866" lastFinishedPulling="2025-12-09 13:36:19.216363223 +0000 UTC m=+5478.465126742" observedRunningTime="2025-12-09 13:36:19.789871352 +0000 UTC m=+5479.038634871" watchObservedRunningTime="2025-12-09 13:36:19.792137522 +0000 UTC m=+5479.040901041" Dec 09 13:36:20 crc kubenswrapper[4703]: I1209 13:36:20.748460 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:20 crc kubenswrapper[4703]: I1209 13:36:20.748994 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:21 crc kubenswrapper[4703]: I1209 13:36:21.798558 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-75k5m" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="registry-server" probeResult="failure" output=< Dec 09 13:36:21 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 13:36:21 crc kubenswrapper[4703]: > Dec 09 13:36:22 crc kubenswrapper[4703]: I1209 13:36:22.638095 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/util/0.log" Dec 09 13:36:22 crc kubenswrapper[4703]: I1209 13:36:22.872002 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/util/0.log" Dec 09 13:36:22 crc kubenswrapper[4703]: I1209 13:36:22.876080 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/pull/0.log" Dec 09 13:36:22 crc kubenswrapper[4703]: I1209 13:36:22.922088 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/pull/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: E1209 13:36:23.073406 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.107971 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/pull/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.127890 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/extract/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.149250 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38vq9n6_b5ad644c-f525-4885-89b3-1d61751802f3/util/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.287414 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-rqtl7_56aba94b-3065-4e94-a683-ddcb0f0f1734/kube-rbac-proxy/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.449126 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-rqtl7_56aba94b-3065-4e94-a683-ddcb0f0f1734/manager/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.476304 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4njvk_874a8c8a-8438-4764-9660-31185bf873e6/kube-rbac-proxy/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.593289 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-4njvk_874a8c8a-8438-4764-9660-31185bf873e6/manager/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.658551 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-cnjkv_fc2c796b-a300-435a-bce4-be428b7b4ac6/kube-rbac-proxy/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.700830 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-cnjkv_fc2c796b-a300-435a-bce4-be428b7b4ac6/manager/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.855274 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-v9r65_8aff308b-1702-4057-80f7-517462396b76/kube-rbac-proxy/0.log" Dec 09 13:36:23 crc kubenswrapper[4703]: I1209 13:36:23.909921 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-v9r65_8aff308b-1702-4057-80f7-517462396b76/manager/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.095008 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9x27n_063b1e9b-8501-497e-b999-280076922605/manager/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.096084 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9x27n_063b1e9b-8501-497e-b999-280076922605/kube-rbac-proxy/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.170007 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-srhz6_13a53c62-2578-4060-8dbf-17fccd6080b1/kube-rbac-proxy/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.347772 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-srhz6_13a53c62-2578-4060-8dbf-17fccd6080b1/manager/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.460415 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-2gwjv_28c19114-205f-4c58-8ca3-7a7a19b0968b/kube-rbac-proxy/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.612898 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-4qrc4_53f0e694-8a8f-4985-a614-f8ca11f6cf32/kube-rbac-proxy/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.776269 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-4qrc4_53f0e694-8a8f-4985-a614-f8ca11f6cf32/manager/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.781969 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-2gwjv_28c19114-205f-4c58-8ca3-7a7a19b0968b/manager/0.log" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.890087 4703 scope.go:117] "RemoveContainer" containerID="4d129ce043037646643f0cecfa1ceebe709d21ba0729f4a87f7254fffafbf6f8" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.934488 4703 scope.go:117] "RemoveContainer" containerID="d7d65208323147a2597a7bd50bf16950dd7b7085ce5a44c6362a76556057589c" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.961058 4703 scope.go:117] "RemoveContainer" containerID="cf123a4cc1b5296a00bfeb1555a6a2c412d8b038c618a250a3dd299baad4a67c" Dec 09 13:36:24 crc kubenswrapper[4703]: I1209 13:36:24.961685 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bv7zd_73fb1f5d-7761-420a-b327-568cab0fb0d2/kube-rbac-proxy/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.070324 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-bv7zd_73fb1f5d-7761-420a-b327-568cab0fb0d2/manager/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.200919 
4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8xf75_df921247-4ca5-4916-b42e-15fc060d72c4/kube-rbac-proxy/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.251835 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8xf75_df921247-4ca5-4916-b42e-15fc060d72c4/manager/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.299849 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-tv5tz_de1bc545-3573-49b4-9a2d-8db33d6f37d1/kube-rbac-proxy/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.458007 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-tv5tz_de1bc545-3573-49b4-9a2d-8db33d6f37d1/manager/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.511875 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-k7rqx_1facff82-6e9e-4bed-8145-1b00dcc84f51/kube-rbac-proxy/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.584138 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-k7rqx_1facff82-6e9e-4bed-8145-1b00dcc84f51/manager/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.726703 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-gpbmp_eb139c31-b7c2-4d15-b9be-c541adf0c87f/kube-rbac-proxy/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.880779 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-gpbmp_eb139c31-b7c2-4d15-b9be-c541adf0c87f/manager/0.log" Dec 09 13:36:25 crc kubenswrapper[4703]: I1209 13:36:25.981979 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-md4r8_2866fa2f-a90b-4137-8ef3-23e9e1140899/kube-rbac-proxy/0.log" Dec 09 13:36:26 crc kubenswrapper[4703]: I1209 13:36:26.063496 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-md4r8_2866fa2f-a90b-4137-8ef3-23e9e1140899/manager/0.log" Dec 09 13:36:26 crc kubenswrapper[4703]: I1209 13:36:26.157832 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fgcmht_a3ff4025-d356-4b31-b42b-5e198155ba91/kube-rbac-proxy/0.log" Dec 09 13:36:26 crc kubenswrapper[4703]: I1209 13:36:26.227546 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fgcmht_a3ff4025-d356-4b31-b42b-5e198155ba91/manager/0.log" Dec 09 13:36:26 crc kubenswrapper[4703]: I1209 13:36:26.708086 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p777b_546d3606-c78f-46dc-8869-9bac880e6757/registry-server/0.log" Dec 09 13:36:26 crc kubenswrapper[4703]: I1209 13:36:26.815590 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6979fbd8bc-6qblc_7b10fb00-08de-4e39-ba8e-58a46ec09b19/operator/0.log" Dec 09 13:36:26 crc 
kubenswrapper[4703]: I1209 13:36:26.880038 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lrk9g_acd94eef-86bb-4acc-8790-011d87eb0da4/kube-rbac-proxy/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.010161 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lrk9g_acd94eef-86bb-4acc-8790-011d87eb0da4/manager/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.616484 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-586c894b5-s992d_ba302959-e371-4d55-a320-062f7aeeefea/manager/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.759716 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-pswqz_37c73372-42dd-44a5-a5cc-e7d324be6981/kube-rbac-proxy/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.821763 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-pswqz_37c73372-42dd-44a5-a5cc-e7d324be6981/manager/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.864139 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-842l9_b37cc5b3-46d5-403e-be1b-46eebc75f0ef/operator/0.log" Dec 09 13:36:27 crc kubenswrapper[4703]: I1209 13:36:27.993023 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-zcck5_4868f7dd-ada4-4df8-9bc8-ae5ca73f2935/kube-rbac-proxy/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.095666 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-zcck5_4868f7dd-ada4-4df8-9bc8-ae5ca73f2935/manager/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.147276 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-797ff5dd46-77fms_db6f122b-a853-4ecb-8d82-2a8a04c8224e/kube-rbac-proxy/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.154321 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.154392 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.225290 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.386320 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-kvpf4_79e382ca-8d65-45fc-8dbf-3626827cb50f/kube-rbac-proxy/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.421826 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-kvpf4_79e382ca-8d65-45fc-8dbf-3626827cb50f/manager/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.564240 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-959xw_d34eb12d-d10a-406c-bfd9-9f772f9e63eb/kube-rbac-proxy/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.712685 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-959xw_d34eb12d-d10a-406c-bfd9-9f772f9e63eb/manager/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.839040 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-797ff5dd46-77fms_db6f122b-a853-4ecb-8d82-2a8a04c8224e/manager/0.log" Dec 09 13:36:28 crc kubenswrapper[4703]: I1209 13:36:28.945008 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7dh4f" Dec 09 13:36:29 crc kubenswrapper[4703]: I1209 13:36:29.019524 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7dh4f"] Dec 09 13:36:29 crc kubenswrapper[4703]: E1209 13:36:29.075005 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:36:29 crc kubenswrapper[4703]: I1209 13:36:29.093330 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5d764"] Dec 09 13:36:29 crc kubenswrapper[4703]: I1209 13:36:29.093630 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5d764" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="registry-server" containerID="cri-o://eaf656918f1a36030ffec613e18ddf1e37e39349549fda80a31eea67d1b29328" gracePeriod=2 Dec 09 13:36:29 crc kubenswrapper[4703]: I1209 13:36:29.925038 4703 generic.go:334] "Generic (PLEG): container finished" podID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerID="eaf656918f1a36030ffec613e18ddf1e37e39349549fda80a31eea67d1b29328" exitCode=0 Dec 09 13:36:29 crc kubenswrapper[4703]: I1209 13:36:29.925313 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerDied","Data":"eaf656918f1a36030ffec613e18ddf1e37e39349549fda80a31eea67d1b29328"} Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.086152 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.086267 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.086383 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:36:30 crc kubenswrapper[4703]: 
I1209 13:36:30.088357 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.088441 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9" gracePeriod=600 Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.426436 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d764" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.511177 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities\") pod \"0d4f1675-f3ef-46ce-a4d3-24f581829298\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.511698 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities" (OuterVolumeSpecName: "utilities") pod "0d4f1675-f3ef-46ce-a4d3-24f581829298" (UID: "0d4f1675-f3ef-46ce-a4d3-24f581829298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.511906 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content\") pod \"0d4f1675-f3ef-46ce-a4d3-24f581829298\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.512002 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99hhf\" (UniqueName: \"kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf\") pod \"0d4f1675-f3ef-46ce-a4d3-24f581829298\" (UID: \"0d4f1675-f3ef-46ce-a4d3-24f581829298\") " Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.513522 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.530991 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf" (OuterVolumeSpecName: "kube-api-access-99hhf") pod "0d4f1675-f3ef-46ce-a4d3-24f581829298" (UID: "0d4f1675-f3ef-46ce-a4d3-24f581829298"). InnerVolumeSpecName "kube-api-access-99hhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.594247 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d4f1675-f3ef-46ce-a4d3-24f581829298" (UID: "0d4f1675-f3ef-46ce-a4d3-24f581829298"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.615840 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d4f1675-f3ef-46ce-a4d3-24f581829298-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.615887 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99hhf\" (UniqueName: \"kubernetes.io/projected/0d4f1675-f3ef-46ce-a4d3-24f581829298-kube-api-access-99hhf\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.816461 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.887490 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.942020 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5d764" event={"ID":"0d4f1675-f3ef-46ce-a4d3-24f581829298","Type":"ContainerDied","Data":"f30168627a37e01d968e7ca050e7ff0f8365817e859b8c1f31e38e368f3e4019"} Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.942104 4703 scope.go:117] "RemoveContainer" containerID="eaf656918f1a36030ffec613e18ddf1e37e39349549fda80a31eea67d1b29328" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.942118 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5d764" Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.954466 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9" exitCode=0 Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.955244 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9"} Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.955475 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"} Dec 09 13:36:30 crc kubenswrapper[4703]: I1209 13:36:30.998716 4703 scope.go:117] "RemoveContainer" containerID="de178cf5ac9455227e06a845671bf5f8b530f4a7e890683a77504cf9a68743ba" Dec 09 13:36:31 crc kubenswrapper[4703]: I1209 13:36:31.088403 4703 scope.go:117] "RemoveContainer" containerID="10b1852da64f22d31c1bf1f64d0532c5f9d2a0f2bbd2489ac213d821bf8c50bc" Dec 09 13:36:31 crc kubenswrapper[4703]: I1209 13:36:31.099177 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5d764"] Dec 09 13:36:31 crc kubenswrapper[4703]: I1209 13:36:31.100829 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5d764"] Dec 09 13:36:31 crc kubenswrapper[4703]: I1209 13:36:31.155546 4703 scope.go:117] "RemoveContainer" containerID="59a313480bf76c604abfa3e91cb3f559c36326e7764983df824f7e63c20b433b" Dec 09 13:36:32 
crc kubenswrapper[4703]: I1209 13:36:32.720942 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:32 crc kubenswrapper[4703]: I1209 13:36:32.722026 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-75k5m" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="registry-server" containerID="cri-o://7568ba7a460b6bebbce44379c761a33d03dd97998a09bd867f84c42a78c072fc" gracePeriod=2 Dec 09 13:36:32 crc kubenswrapper[4703]: I1209 13:36:32.987618 4703 generic.go:334] "Generic (PLEG): container finished" podID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerID="7568ba7a460b6bebbce44379c761a33d03dd97998a09bd867f84c42a78c072fc" exitCode=0 Dec 09 13:36:32 crc kubenswrapper[4703]: I1209 13:36:32.987700 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerDied","Data":"7568ba7a460b6bebbce44379c761a33d03dd97998a09bd867f84c42a78c072fc"} Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.086092 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" path="/var/lib/kubelet/pods/0d4f1675-f3ef-46ce-a4d3-24f581829298/volumes" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.435363 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.536663 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content\") pod \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.536873 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities\") pod \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.536987 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4mf\" (UniqueName: \"kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf\") pod \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\" (UID: \"a61257f4-093f-4d25-ad4e-d33ee8ab012b\") " Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.538006 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities" (OuterVolumeSpecName: "utilities") pod "a61257f4-093f-4d25-ad4e-d33ee8ab012b" (UID: "a61257f4-093f-4d25-ad4e-d33ee8ab012b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.546086 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf" (OuterVolumeSpecName: "kube-api-access-kt4mf") pod "a61257f4-093f-4d25-ad4e-d33ee8ab012b" (UID: "a61257f4-093f-4d25-ad4e-d33ee8ab012b"). InnerVolumeSpecName "kube-api-access-kt4mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.573588 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a61257f4-093f-4d25-ad4e-d33ee8ab012b" (UID: "a61257f4-093f-4d25-ad4e-d33ee8ab012b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.640173 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.640241 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a61257f4-093f-4d25-ad4e-d33ee8ab012b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:33 crc kubenswrapper[4703]: I1209 13:36:33.640255 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4mf\" (UniqueName: \"kubernetes.io/projected/a61257f4-093f-4d25-ad4e-d33ee8ab012b-kube-api-access-kt4mf\") on node \"crc\" DevicePath \"\"" Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.002794 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75k5m" event={"ID":"a61257f4-093f-4d25-ad4e-d33ee8ab012b","Type":"ContainerDied","Data":"fed9ef35bd55f65c90cbdbafdeb7bfa3e1c28d3f6caf7d41a22c4011be2fc200"} Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.003387 4703 scope.go:117] "RemoveContainer" containerID="7568ba7a460b6bebbce44379c761a33d03dd97998a09bd867f84c42a78c072fc" Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.002904 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75k5m" Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.037894 4703 scope.go:117] "RemoveContainer" containerID="45d90ccb5ce3d55f5c6a3376bf72520e41086158df72befbc6d2a6678cc591fb" Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.058733 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.067215 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-75k5m"] Dec 09 13:36:34 crc kubenswrapper[4703]: I1209 13:36:34.072815 4703 scope.go:117] "RemoveContainer" containerID="3e3a78c22c401f38288ff0da64be8126e3a1ac4b693d1f08563968b278f9ab54" Dec 09 13:36:35 crc kubenswrapper[4703]: I1209 13:36:35.085592 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" path="/var/lib/kubelet/pods/a61257f4-093f-4d25-ad4e-d33ee8ab012b/volumes" Dec 09 13:36:38 crc kubenswrapper[4703]: E1209 13:36:38.073510 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:36:42 crc kubenswrapper[4703]: E1209 13:36:42.073727 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:36:49 crc kubenswrapper[4703]: E1209 13:36:49.073287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:36:54 crc kubenswrapper[4703]: I1209 13:36:54.259169 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6cpwj_42f7caf7-af08-4406-b189-3e4ce5fa6819/control-plane-machine-set-operator/0.log" Dec 09 13:36:54 crc kubenswrapper[4703]: I1209 13:36:54.556489 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-czvkf_558df921-8b36-45ba-8cc8-b25a1a2f6172/machine-api-operator/0.log" Dec 09 13:36:54 crc kubenswrapper[4703]: I1209 13:36:54.560758 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-czvkf_558df921-8b36-45ba-8cc8-b25a1a2f6172/kube-rbac-proxy/0.log" Dec 09 13:36:56 crc kubenswrapper[4703]: E1209 13:36:56.074574 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:37:02 crc kubenswrapper[4703]: E1209 13:37:02.072740 4703 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:37:08 crc kubenswrapper[4703]: I1209 13:37:08.428686 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-bnzh5_ca3614a5-2aca-4c15-b0c1-418925c20ce9/cert-manager-controller/0.log" Dec 09 13:37:08 crc kubenswrapper[4703]: I1209 13:37:08.543777 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-sjnhl_73ce4d20-c9fc-4eda-b404-056f4dc06c03/cert-manager-cainjector/0.log" Dec 09 13:37:08 crc kubenswrapper[4703]: I1209 13:37:08.635568 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2mlm8_0dec1409-3c5a-4672-bc18-06c9a59876a0/cert-manager-webhook/0.log" Dec 09 13:37:11 crc kubenswrapper[4703]: E1209 13:37:11.081448 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:37:15 crc kubenswrapper[4703]: E1209 13:37:15.073386 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:37:22 crc kubenswrapper[4703]: E1209 13:37:22.080572 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:37:24 crc kubenswrapper[4703]: I1209 13:37:24.592232 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-psqsd_70fee6a1-39ea-49c7-9af1-a1479caed970/nmstate-console-plugin/0.log" Dec 09 13:37:24 crc kubenswrapper[4703]: I1209 13:37:24.784423 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8tjxd_f7efe488-c603-4417-bc88-d790f961bd6b/nmstate-handler/0.log" Dec 09 13:37:24 crc kubenswrapper[4703]: I1209 13:37:24.842268 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-x4ffh_3cd7328d-79a9-479a-b16d-e0b8562cb246/nmstate-metrics/0.log" Dec 09 13:37:24 crc kubenswrapper[4703]: I1209 13:37:24.842881 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-x4ffh_3cd7328d-79a9-479a-b16d-e0b8562cb246/kube-rbac-proxy/0.log" Dec 09 13:37:25 crc kubenswrapper[4703]: I1209 13:37:25.014610 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-6r94d_6d0f6831-61e4-49f6-996d-2c1dcc081e40/nmstate-operator/0.log" Dec 09 13:37:25 crc kubenswrapper[4703]: I1209 13:37:25.088960 4703 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-49s88_ad79ffaf-8e65-45be-bd0a-4abcd3bafb06/nmstate-webhook/0.log" Dec 09 13:37:28 crc kubenswrapper[4703]: E1209 13:37:28.073146 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:37:37 crc kubenswrapper[4703]: E1209 13:37:37.073160 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:37:41 crc kubenswrapper[4703]: I1209 13:37:41.320600 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78784949d8-qbts8_253ad9ab-9f72-4252-91c2-8a79577155a2/kube-rbac-proxy/0.log" Dec 09 13:37:41 crc kubenswrapper[4703]: I1209 13:37:41.409921 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78784949d8-qbts8_253ad9ab-9f72-4252-91c2-8a79577155a2/manager/0.log" Dec 09 13:37:42 crc kubenswrapper[4703]: E1209 13:37:42.072416 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:37:50 crc kubenswrapper[4703]: E1209 13:37:50.072673 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:37:53 crc kubenswrapper[4703]: E1209 13:37:53.072683 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:37:59 crc kubenswrapper[4703]: I1209 13:37:59.942442 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-76gdd_e44337b7-8419-419c-8281-64da2bc8d0aa/kube-rbac-proxy/0.log" Dec 09 13:38:00 crc kubenswrapper[4703]: I1209 13:38:00.058393 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-76gdd_e44337b7-8419-419c-8281-64da2bc8d0aa/controller/0.log" Dec 09 13:38:00 crc kubenswrapper[4703]: I1209 13:38:00.394641 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-frr-files/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.174100 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-reloader/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.174468 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-frr-files/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.199497 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-metrics/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.213906 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-reloader/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.407430 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-frr-files/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.462774 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-reloader/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.493901 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-metrics/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.514929 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-metrics/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.716957 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/controller/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.756034 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-frr-files/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.767065 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-reloader/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.770297 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/cp-metrics/0.log" Dec 09 13:38:01 crc kubenswrapper[4703]: I1209 13:38:01.989277 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/kube-rbac-proxy-frr/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.004084 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/frr-metrics/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.032133 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/kube-rbac-proxy/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.283573 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/reloader/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.340908 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-bw4n2_9dd2a2e2-1c03-4529-946e-abe76835d2f4/frr-k8s-webhook-server/0.log" Dec 09 
13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.521620 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6cdb947f87-7qh5t_099a4868-6d13-416d-bace-7c2a09de41a2/manager/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.770704 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-867f57ddcd-4kfk5_f8b98ceb-52b8-4666-8f76-06b1b3e6c01a/webhook-server/0.log" Dec 09 13:38:02 crc kubenswrapper[4703]: I1209 13:38:02.844115 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r685h_06eb48ca-40a8-4c61-97f5-b6ed1f0691fe/kube-rbac-proxy/0.log" Dec 09 13:38:03 crc kubenswrapper[4703]: I1209 13:38:03.581844 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j5bbv_8e7bbe07-74ed-43ff-9034-1e93caf42289/frr/0.log" Dec 09 13:38:03 crc kubenswrapper[4703]: I1209 13:38:03.618232 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-r685h_06eb48ca-40a8-4c61-97f5-b6ed1f0691fe/speaker/0.log" Dec 09 13:38:04 crc kubenswrapper[4703]: E1209 13:38:04.073897 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:38:05 crc kubenswrapper[4703]: E1209 13:38:05.071864 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:38:15 crc kubenswrapper[4703]: E1209 13:38:15.072631 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.193261 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/util/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.532783 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/pull/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.541480 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/util/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.567721 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/pull/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.760924 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/util/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.792092 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/extract/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.816549 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fccg8t_4fdac27b-2cf2-4a2d-851e-75dcb928860f/pull/0.log" Dec 09 13:38:19 crc kubenswrapper[4703]: I1209 13:38:19.983073 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/util/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: E1209 13:38:20.073197 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.218363 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/pull/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.233056 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/util/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.513301 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/pull/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.713832 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/util/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.731980 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/pull/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.732243 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p9nx6_fe409fef-afb0-4377-9bda-f6f1e9390efc/extract/0.log" Dec 09 13:38:20 crc kubenswrapper[4703]: I1209 13:38:20.916892 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/util/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.127639 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/util/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.147291 4703 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/pull/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.157356 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/pull/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.355999 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/util/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.373255 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/pull/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.389771 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7b5aa1f5b38b68c96e281700110eb6f32773ca4b2682978fa6f2ffb2c127cl5_61119125-f807-4f7b-b62e-e76f6cbfe8d2/extract/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.636729 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/util/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.849772 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/pull/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.893201 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/util/0.log" Dec 09 13:38:21 crc kubenswrapper[4703]: I1209 13:38:21.927714 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/pull/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.117823 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/util/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.118992 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/pull/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.180824 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83s57fh_870e333f-b82c-439d-9d53-ce57aa5c83c9/extract/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.331278 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-utilities/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.544642 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-utilities/0.log" Dec 09 13:38:22 crc 
kubenswrapper[4703]: I1209 13:38:22.570004 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-content/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.582215 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-content/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.770893 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-utilities/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.811636 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/extract-content/0.log" Dec 09 13:38:22 crc kubenswrapper[4703]: I1209 13:38:22.978875 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7dh4f_7c34f924-873b-4f3b-871c-696caae7d620/registry-server/0.log" Dec 09 13:38:23 crc kubenswrapper[4703]: I1209 13:38:23.017651 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-utilities/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.014012 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-content/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.016368 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-utilities/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.055635 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-content/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.200148 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-content/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.253739 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/extract-utilities/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.315636 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2s2wq_da71670f-b188-4a71-ac05-55ad7e238d62/marketplace-operator/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.470655 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-utilities/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.773831 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-content/0.log" Dec 09 13:38:24 crc kubenswrapper[4703]: I1209 13:38:24.790444 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-content/0.log" Dec 09 13:38:24 crc 
kubenswrapper[4703]: I1209 13:38:24.793025 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-utilities/0.log" Dec 09 13:38:25 crc kubenswrapper[4703]: I1209 13:38:25.042832 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-utilities/0.log" Dec 09 13:38:25 crc kubenswrapper[4703]: I1209 13:38:25.057175 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/extract-content/0.log" Dec 09 13:38:25 crc kubenswrapper[4703]: I1209 13:38:25.231507 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-utilities/0.log" Dec 09 13:38:25 crc kubenswrapper[4703]: I1209 13:38:25.320066 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kvftx_71b2ce30-0051-4329-b922-c8647bb87bb1/registry-server/0.log" Dec 09 13:38:25 crc kubenswrapper[4703]: I1209 13:38:25.359861 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8rlrm_e5c5b466-1538-4449-b93c-abeee2b2c8ff/registry-server/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.016795 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-utilities/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.041213 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-content/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.064592 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-content/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: E1209 13:38:26.073114 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.214607 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-utilities/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.231748 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/extract-content/0.log" Dec 09 13:38:26 crc kubenswrapper[4703]: I1209 13:38:26.430370 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-66jg7_c345164c-369d-4bc8-bb87-937657bc641a/registry-server/0.log" Dec 09 13:38:30 crc kubenswrapper[4703]: I1209 13:38:30.083273 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 
13:38:30 crc kubenswrapper[4703]: I1209 13:38:30.083878 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:38:33 crc kubenswrapper[4703]: E1209 13:38:33.072259 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:38:40 crc kubenswrapper[4703]: I1209 13:38:40.993131 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-dg7pb_4cec0df1-f871-497e-8d5a-03ed7c99c085/prometheus-operator/0.log" Dec 09 13:38:41 crc kubenswrapper[4703]: E1209 13:38:41.087431 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:38:41 crc kubenswrapper[4703]: I1209 13:38:41.159426 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-677d894988-5jcb6_c0b01c4e-ed19-4d86-b0f7-a459744771d5/prometheus-operator-admission-webhook/0.log" Dec 09 13:38:41 crc kubenswrapper[4703]: I1209 13:38:41.222858 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-677d894988-6gqww_b7467102-297a-4824-a598-f22317525002/prometheus-operator-admission-webhook/0.log" Dec 09 13:38:41 crc kubenswrapper[4703]: I1209 13:38:41.395401 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-bfp54_a3216052-a675-4452-b6b4-a63dcce7a51f/operator/0.log" Dec 09 13:38:41 crc kubenswrapper[4703]: I1209 13:38:41.527574 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-tzn85_2f5ff9fe-6b53-49b8-ba78-30df51c9473e/perses-operator/0.log" Dec 09 13:38:44 crc kubenswrapper[4703]: E1209 13:38:44.072716 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:38:54 crc kubenswrapper[4703]: E1209 13:38:54.074333 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:38:55 crc kubenswrapper[4703]: E1209 13:38:55.072776 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:38:57 crc kubenswrapper[4703]: I1209 13:38:57.830090 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78784949d8-qbts8_253ad9ab-9f72-4252-91c2-8a79577155a2/kube-rbac-proxy/0.log" Dec 09 13:38:58 crc kubenswrapper[4703]: I1209 13:38:58.041515 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-78784949d8-qbts8_253ad9ab-9f72-4252-91c2-8a79577155a2/manager/0.log" Dec 09 13:39:00 crc kubenswrapper[4703]: I1209 13:39:00.084040 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:39:00 crc kubenswrapper[4703]: I1209 13:39:00.084591 4703 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:39:05 crc kubenswrapper[4703]: E1209 13:39:05.074549 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:39:10 crc kubenswrapper[4703]: E1209 13:39:10.071694 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:39:18 crc kubenswrapper[4703]: E1209 13:39:18.072953 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:39:23 crc kubenswrapper[4703]: E1209 13:39:23.072577 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:39:30 crc kubenswrapper[4703]: I1209 13:39:30.083609 4703 patch_prober.go:28] interesting pod/machine-config-daemon-q8sfk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:39:30 crc kubenswrapper[4703]: I1209 13:39:30.084640 4703 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:39:30 crc kubenswrapper[4703]: I1209 13:39:30.084714 4703 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" Dec 09 13:39:30 crc kubenswrapper[4703]: I1209 13:39:30.085831 4703 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"} pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:39:30 crc kubenswrapper[4703]: I1209 13:39:30.085923 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" containerName="machine-config-daemon" containerID="cri-o://075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" gracePeriod=600 Dec 09 13:39:30 crc kubenswrapper[4703]: E1209 13:39:30.245693 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:39:30 crc kubenswrapper[4703]: E1209 13:39:30.356754 4703 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32956ceb_8540_406e_8693_e86efb46cd42.slice/crio-conmon-075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32956ceb_8540_406e_8693_e86efb46cd42.slice/crio-075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241.scope\": RecentStats: unable to find data in memory cache]" Dec 09 13:39:31 crc kubenswrapper[4703]: I1209 13:39:31.194313 4703 generic.go:334] "Generic (PLEG): container finished" podID="32956ceb-8540-406e-8693-e86efb46cd42" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" exitCode=0 Dec 09 13:39:31 crc kubenswrapper[4703]: I1209 13:39:31.194390 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerDied","Data":"075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"} Dec 09 13:39:31 crc kubenswrapper[4703]: I1209 13:39:31.194732 4703 scope.go:117] "RemoveContainer" containerID="82cad877c09508c36831656f283fbcfe71c780c9bb7605d0060bb517204558a9" Dec 09 13:39:31 crc kubenswrapper[4703]: I1209 13:39:31.195577 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:39:31 crc kubenswrapper[4703]: E1209 13:39:31.195877 4703 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:39:33 crc kubenswrapper[4703]: E1209 13:39:33.072102 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:39:34 crc kubenswrapper[4703]: E1209 13:39:34.076179 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:39:43 crc kubenswrapper[4703]: I1209 13:39:43.070206 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:39:43 crc kubenswrapper[4703]: E1209 13:39:43.071123 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:39:45 crc kubenswrapper[4703]: I1209 13:39:45.073320 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:39:45 crc kubenswrapper[4703]: E1209 13:39:45.192171 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:39:45 crc kubenswrapper[4703]: E1209 13:39:45.192288 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:39:45 crc kubenswrapper[4703]: E1209 13:39:45.192545 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ce42c586-f397-4f98-be45-f56d36115d7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:39:45 crc kubenswrapper[4703]: E1209 13:39:45.193831 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:39:47 crc kubenswrapper[4703]: E1209 13:39:47.073921 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:39:57 crc kubenswrapper[4703]: I1209 13:39:57.070646 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:39:57 crc kubenswrapper[4703]: E1209 13:39:57.071871 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:39:57 crc kubenswrapper[4703]: E1209 13:39:57.073596 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:40:00 crc kubenswrapper[4703]: E1209 13:40:00.096933 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:40:09 crc kubenswrapper[4703]: E1209 13:40:09.074348 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:40:10 crc kubenswrapper[4703]: I1209 13:40:10.070138 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:40:10 crc kubenswrapper[4703]: E1209 13:40:10.070470 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:40:14 crc kubenswrapper[4703]: E1209 13:40:14.234548 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:40:14 crc kubenswrapper[4703]: E1209 13:40:14.235259 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Dec 09 13:40:14 crc kubenswrapper[4703]: E1209 13:40:14.235437 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 13:40:14 crc kubenswrapper[4703]: E1209 13:40:14.236668 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:40:21 crc kubenswrapper[4703]: E1209 13:40:21.085477 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.659635 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660796 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="extract-content" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660815 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="extract-content" Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660834 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660841 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660858 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="extract-utilities" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660865 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="extract-utilities" Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660885 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660890 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660901 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="extract-content" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660907 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="extract-content" Dec 09 13:40:23 crc kubenswrapper[4703]: E1209 13:40:23.660924 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="extract-utilities" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.660929 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="extract-utilities" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.661142 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61257f4-093f-4d25-ad4e-d33ee8ab012b" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.661166 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d4f1675-f3ef-46ce-a4d3-24f581829298" containerName="registry-server" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.662951 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.683696 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.786357 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.786831 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.787018 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktdd\" (UniqueName: \"kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.889717 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.889793 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktdd\" (UniqueName: \"kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.889851 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.890601 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content\") pod 
\"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.890705 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.926639 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktdd\" (UniqueName: \"kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd\") pod \"community-operators-fhx26\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:23 crc kubenswrapper[4703]: I1209 13:40:23.988984 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:24 crc kubenswrapper[4703]: I1209 13:40:24.069781 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:40:24 crc kubenswrapper[4703]: E1209 13:40:24.070074 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:40:24 crc kubenswrapper[4703]: I1209 13:40:24.617476 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:24 crc kubenswrapper[4703]: I1209 13:40:24.838313 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerStarted","Data":"42d5d8469dd118eb4091519b26610c394256e11565a5d5957be42083cc03aa8a"} Dec 09 13:40:25 crc kubenswrapper[4703]: I1209 13:40:25.850860 4703 generic.go:334] "Generic (PLEG): container finished" podID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerID="f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20" exitCode=0 Dec 09 13:40:25 crc kubenswrapper[4703]: I1209 13:40:25.850932 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerDied","Data":"f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20"} Dec 09 13:40:26 crc kubenswrapper[4703]: E1209 13:40:26.071555 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:40:26 crc kubenswrapper[4703]: I1209 13:40:26.864442 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" 
event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerStarted","Data":"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961"} Dec 09 13:40:27 crc kubenswrapper[4703]: I1209 13:40:27.878571 4703 generic.go:334] "Generic (PLEG): container finished" podID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerID="a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961" exitCode=0 Dec 09 13:40:27 crc kubenswrapper[4703]: I1209 13:40:27.878980 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerDied","Data":"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961"} Dec 09 13:40:28 crc kubenswrapper[4703]: I1209 13:40:28.894860 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerStarted","Data":"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd"} Dec 09 13:40:28 crc kubenswrapper[4703]: I1209 13:40:28.919993 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhx26" podStartSLOduration=3.499432977 podStartE2EDuration="5.919969163s" podCreationTimestamp="2025-12-09 13:40:23 +0000 UTC" firstStartedPulling="2025-12-09 13:40:25.854277348 +0000 UTC m=+5725.103040887" lastFinishedPulling="2025-12-09 13:40:28.274813554 +0000 UTC m=+5727.523577073" observedRunningTime="2025-12-09 13:40:28.91631945 +0000 UTC m=+5728.165082999" watchObservedRunningTime="2025-12-09 13:40:28.919969163 +0000 UTC m=+5728.168732682" Dec 09 13:40:33 crc kubenswrapper[4703]: I1209 13:40:33.989282 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:33 crc kubenswrapper[4703]: I1209 13:40:33.990012 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:34 crc kubenswrapper[4703]: I1209 13:40:34.053003 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:35 crc kubenswrapper[4703]: I1209 13:40:35.023498 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:35 crc kubenswrapper[4703]: I1209 13:40:35.070770 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:40:35 crc kubenswrapper[4703]: E1209 13:40:35.071146 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:40:35 crc kubenswrapper[4703]: E1209 13:40:35.076236 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:40:35 crc 
kubenswrapper[4703]: I1209 13:40:35.089527 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:36 crc kubenswrapper[4703]: I1209 13:40:36.986382 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fhx26" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="registry-server" containerID="cri-o://7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd" gracePeriod=2 Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.567292 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.677560 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xktdd\" (UniqueName: \"kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd\") pod \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.677853 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content\") pod \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.678045 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities\") pod \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\" (UID: \"f7112e8c-daa5-475b-bb0e-1c550cf2af57\") " Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.679343 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities" (OuterVolumeSpecName: "utilities") pod "f7112e8c-daa5-475b-bb0e-1c550cf2af57" (UID: "f7112e8c-daa5-475b-bb0e-1c550cf2af57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.687034 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd" (OuterVolumeSpecName: "kube-api-access-xktdd") pod "f7112e8c-daa5-475b-bb0e-1c550cf2af57" (UID: "f7112e8c-daa5-475b-bb0e-1c550cf2af57"). InnerVolumeSpecName "kube-api-access-xktdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.742364 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7112e8c-daa5-475b-bb0e-1c550cf2af57" (UID: "f7112e8c-daa5-475b-bb0e-1c550cf2af57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.781021 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.781065 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xktdd\" (UniqueName: \"kubernetes.io/projected/f7112e8c-daa5-475b-bb0e-1c550cf2af57-kube-api-access-xktdd\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:37 crc kubenswrapper[4703]: I1209 13:40:37.781076 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7112e8c-daa5-475b-bb0e-1c550cf2af57-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.016716 4703 generic.go:334] "Generic (PLEG): container finished" podID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerID="7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd" exitCode=0 Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.016779 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerDied","Data":"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd"} Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.016825 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhx26" event={"ID":"f7112e8c-daa5-475b-bb0e-1c550cf2af57","Type":"ContainerDied","Data":"42d5d8469dd118eb4091519b26610c394256e11565a5d5957be42083cc03aa8a"} Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.016851 4703 scope.go:117] "RemoveContainer" containerID="7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.016856 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhx26" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.058017 4703 scope.go:117] "RemoveContainer" containerID="a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.081879 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.094967 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fhx26"] Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.462870 4703 scope.go:117] "RemoveContainer" containerID="f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.511426 4703 scope.go:117] "RemoveContainer" containerID="7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd" Dec 09 13:40:38 crc kubenswrapper[4703]: E1209 13:40:38.511979 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd\": container with ID starting with 7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd not found: ID does not exist" containerID="7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.512027 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd"} err="failed to get container status \"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd\": rpc error: code = NotFound desc = could not find container \"7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd\": container with ID starting with 7766df685911aaf2e8fe0a2a52218f945eb47318b6d677714770ff32bb86fbfd not found: ID does not exist" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.512060 4703 scope.go:117] "RemoveContainer" containerID="a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961" Dec 09 13:40:38 crc kubenswrapper[4703]: E1209 13:40:38.512746 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961\": container with ID starting with a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961 not found: ID does not exist" containerID="a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.512806 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961"} err="failed to get container status \"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961\": rpc error: code = NotFound desc = could not find container \"a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961\": container with ID starting with a7b4ed4b1f2648612bdbb3bb26acba24cf820679d517257fcec87e5d287fc961 not found: ID does not exist" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.512858 4703 scope.go:117] "RemoveContainer" containerID="f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20" Dec 09 13:40:38 crc kubenswrapper[4703]: E1209 13:40:38.513286 4703 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20\": container with ID starting with f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20 not found: ID does not exist" containerID="f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20" Dec 09 13:40:38 crc kubenswrapper[4703]: I1209 13:40:38.513352 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20"} err="failed to get container status \"f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20\": rpc error: code = NotFound desc = could not find container \"f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20\": container with ID starting with f3c2e29f8d0a3693b168d59421887139bd62a3267e6a39c63b6dd1c482c9ad20 not found: ID does not exist" Dec 09 13:40:39 crc kubenswrapper[4703]: I1209 13:40:39.086060 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" path="/var/lib/kubelet/pods/f7112e8c-daa5-475b-bb0e-1c550cf2af57/volumes" Dec 09 13:40:40 crc kubenswrapper[4703]: E1209 13:40:40.071803 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:40:47 crc kubenswrapper[4703]: E1209 13:40:47.073869 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:40:49 crc kubenswrapper[4703]: I1209 13:40:49.070389 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:40:49 crc kubenswrapper[4703]: E1209 13:40:49.071289 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:40:50 crc kubenswrapper[4703]: I1209 13:40:50.159977 4703 generic.go:334] "Generic (PLEG): container finished" podID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerID="b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47" exitCode=0 Dec 09 13:40:50 crc kubenswrapper[4703]: I1209 13:40:50.160065 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jl546/must-gather-kbc2l" event={"ID":"abea605e-e3a0-478a-a68c-4e5c49ca4524","Type":"ContainerDied","Data":"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47"} Dec 09 13:40:50 crc kubenswrapper[4703]: I1209 13:40:50.161341 4703 scope.go:117] "RemoveContainer" containerID="b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47" Dec 09 13:40:50 crc kubenswrapper[4703]: I1209 13:40:50.837026 4703 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-jl546_must-gather-kbc2l_abea605e-e3a0-478a-a68c-4e5c49ca4524/gather/0.log" Dec 09 13:40:54 crc kubenswrapper[4703]: E1209 13:40:54.072830 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:40:59 crc kubenswrapper[4703]: I1209 13:40:59.282613 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jl546/must-gather-kbc2l"] Dec 09 13:40:59 crc kubenswrapper[4703]: I1209 13:40:59.284391 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jl546/must-gather-kbc2l" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="copy" containerID="cri-o://78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747" gracePeriod=2 Dec 09 13:40:59 crc kubenswrapper[4703]: I1209 13:40:59.294325 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jl546/must-gather-kbc2l"] Dec 09 13:40:59 crc kubenswrapper[4703]: I1209 13:40:59.897868 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jl546_must-gather-kbc2l_abea605e-e3a0-478a-a68c-4e5c49ca4524/copy/0.log" Dec 09 13:40:59 crc kubenswrapper[4703]: I1209 13:40:59.899023 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.006585 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sflrs\" (UniqueName: \"kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs\") pod \"abea605e-e3a0-478a-a68c-4e5c49ca4524\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.006792 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output\") pod \"abea605e-e3a0-478a-a68c-4e5c49ca4524\" (UID: \"abea605e-e3a0-478a-a68c-4e5c49ca4524\") " Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.014439 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs" (OuterVolumeSpecName: "kube-api-access-sflrs") pod "abea605e-e3a0-478a-a68c-4e5c49ca4524" (UID: "abea605e-e3a0-478a-a68c-4e5c49ca4524"). InnerVolumeSpecName "kube-api-access-sflrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.109596 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sflrs\" (UniqueName: \"kubernetes.io/projected/abea605e-e3a0-478a-a68c-4e5c49ca4524-kube-api-access-sflrs\") on node \"crc\" DevicePath \"\"" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.188716 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "abea605e-e3a0-478a-a68c-4e5c49ca4524" (UID: "abea605e-e3a0-478a-a68c-4e5c49ca4524"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.213170 4703 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/abea605e-e3a0-478a-a68c-4e5c49ca4524-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.288692 4703 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jl546_must-gather-kbc2l_abea605e-e3a0-478a-a68c-4e5c49ca4524/copy/0.log" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.290461 4703 generic.go:334] "Generic (PLEG): container finished" podID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerID="78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747" exitCode=143 Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.290537 4703 scope.go:117] "RemoveContainer" containerID="78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.290587 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jl546/must-gather-kbc2l" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.321431 4703 scope.go:117] "RemoveContainer" containerID="b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.440759 4703 scope.go:117] "RemoveContainer" containerID="78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747" Dec 09 13:41:00 crc kubenswrapper[4703]: E1209 13:41:00.441276 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747\": container with ID starting with 78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747 not found: ID does not exist" containerID="78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.441345 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747"} err="failed to get container status \"78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747\": rpc error: code = NotFound desc = could not find container \"78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747\": container with ID starting with 78c3f6eac63593023da83e166953248570629bf76d780a1d2b8a602c6af5c747 not found: ID does not exist" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.441385 4703 scope.go:117] "RemoveContainer" containerID="b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47" Dec 09 13:41:00 crc kubenswrapper[4703]: E1209 13:41:00.442607 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47\": container with ID starting with b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47 not found: ID does not exist" containerID="b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47" Dec 09 13:41:00 crc kubenswrapper[4703]: I1209 13:41:00.442698 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47"} err="failed to get container status 
\"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47\": rpc error: code = NotFound desc = could not find container \"b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47\": container with ID starting with b3d70353cf6cf8331c233c51a12d1b6158cc12a8b655933ce419593d59124d47 not found: ID does not exist" Dec 09 13:41:01 crc kubenswrapper[4703]: I1209 13:41:01.090372 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" path="/var/lib/kubelet/pods/abea605e-e3a0-478a-a68c-4e5c49ca4524/volumes" Dec 09 13:41:02 crc kubenswrapper[4703]: E1209 13:41:02.072693 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:41:04 crc kubenswrapper[4703]: I1209 13:41:04.070020 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:41:04 crc kubenswrapper[4703]: E1209 13:41:04.071550 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:41:05 crc kubenswrapper[4703]: E1209 13:41:05.073625 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:41:14 crc kubenswrapper[4703]: E1209 13:41:14.073982 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:41:16 crc kubenswrapper[4703]: I1209 13:41:16.070706 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:41:16 crc kubenswrapper[4703]: E1209 13:41:16.071221 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:41:18 crc kubenswrapper[4703]: E1209 13:41:18.073542 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" 
podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:41:28 crc kubenswrapper[4703]: E1209 13:41:28.075488 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:41:29 crc kubenswrapper[4703]: I1209 13:41:29.070567 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:41:29 crc kubenswrapper[4703]: E1209 13:41:29.070890 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:41:32 crc kubenswrapper[4703]: E1209 13:41:32.071547 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:41:40 crc kubenswrapper[4703]: E1209 13:41:40.073673 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:41:42 crc kubenswrapper[4703]: I1209 13:41:42.071103 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:41:42 crc kubenswrapper[4703]: E1209 13:41:42.071997 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:41:43 crc kubenswrapper[4703]: E1209 13:41:43.074328 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.326761 4703 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:41:44 crc kubenswrapper[4703]: E1209 13:41:44.327931 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="extract-content" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.327966 4703 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="extract-content" Dec 09 13:41:44 crc kubenswrapper[4703]: E1209 13:41:44.328019 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="copy" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328030 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="copy" Dec 09 13:41:44 crc kubenswrapper[4703]: E1209 13:41:44.328065 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="extract-utilities" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328080 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="extract-utilities" Dec 09 13:41:44 crc kubenswrapper[4703]: E1209 13:41:44.328130 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="registry-server" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328142 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="registry-server" Dec 09 13:41:44 crc kubenswrapper[4703]: E1209 13:41:44.328162 4703 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="gather" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328170 4703 state_mem.go:107] "Deleted CPUSet assignment" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="gather" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328473 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="gather" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328500 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="abea605e-e3a0-478a-a68c-4e5c49ca4524" containerName="copy" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.328514 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7112e8c-daa5-475b-bb0e-1c550cf2af57" containerName="registry-server" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.330812 4703 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.337610 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.472246 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.472730 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.472792 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvmk6\" (UniqueName: \"kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.575512 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.575634 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvmk6\" (UniqueName: \"kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.575727 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.576101 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.576335 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.609220 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tvmk6\" (UniqueName: \"kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6\") pod \"redhat-operators-dpjgp\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:44 crc kubenswrapper[4703]: I1209 13:41:44.671606 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:45 crc kubenswrapper[4703]: I1209 13:41:45.187940 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:41:45 crc kubenswrapper[4703]: I1209 13:41:45.871571 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerStarted","Data":"d258eae6ea29990bfbb206dbf03adcf9eadb73891a89993aadaee33344f494ea"} Dec 09 13:41:46 crc kubenswrapper[4703]: I1209 13:41:46.882714 4703 generic.go:334] "Generic (PLEG): container finished" podID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerID="7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7" exitCode=0 Dec 09 13:41:46 crc kubenswrapper[4703]: I1209 13:41:46.882814 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerDied","Data":"7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7"} Dec 09 13:41:48 crc kubenswrapper[4703]: I1209 13:41:48.910837 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerStarted","Data":"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a"} Dec 09 13:41:49 crc kubenswrapper[4703]: I1209 13:41:49.922665 4703 generic.go:334] "Generic (PLEG): container finished" podID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerID="98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a" exitCode=0 Dec 09 13:41:49 crc kubenswrapper[4703]: I1209 13:41:49.922722 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerDied","Data":"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a"} Dec 09 13:41:50 crc kubenswrapper[4703]: I1209 13:41:50.942792 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerStarted","Data":"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6"} Dec 09 13:41:50 crc kubenswrapper[4703]: I1209 13:41:50.969479 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dpjgp" podStartSLOduration=3.542798527 podStartE2EDuration="6.969445797s" podCreationTimestamp="2025-12-09 13:41:44 +0000 UTC" firstStartedPulling="2025-12-09 13:41:46.888061571 +0000 UTC m=+5806.136825090" lastFinishedPulling="2025-12-09 13:41:50.314708841 +0000 UTC m=+5809.563472360" observedRunningTime="2025-12-09 13:41:50.96131445 +0000 UTC m=+5810.210077979" watchObservedRunningTime="2025-12-09 13:41:50.969445797 +0000 UTC m=+5810.218209346" Dec 09 13:41:52 crc kubenswrapper[4703]: E1209 13:41:52.072620 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:41:54 crc kubenswrapper[4703]: I1209 13:41:54.071366 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:41:54 crc kubenswrapper[4703]: E1209 13:41:54.072137 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:41:54 crc kubenswrapper[4703]: I1209 13:41:54.671681 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:54 crc kubenswrapper[4703]: I1209 13:41:54.671861 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:41:55 crc kubenswrapper[4703]: I1209 13:41:55.738851 4703 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dpjgp" podUID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerName="registry-server" probeResult="failure" output=< Dec 09 13:41:55 crc kubenswrapper[4703]: timeout: failed to connect service ":50051" within 1s Dec 09 13:41:55 crc kubenswrapper[4703]: > Dec 09 13:41:58 crc kubenswrapper[4703]: E1209 13:41:58.072736 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:42:03 crc kubenswrapper[4703]: E1209 13:42:03.073396 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:42:04 crc kubenswrapper[4703]: I1209 13:42:04.727430 4703 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:42:04 crc kubenswrapper[4703]: I1209 13:42:04.783943 4703 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:42:05 crc kubenswrapper[4703]: I1209 13:42:05.001938 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.132493 4703 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dpjgp" podUID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerName="registry-server" containerID="cri-o://853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6" gracePeriod=2 Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.663571 4703 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.775163 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvmk6\" (UniqueName: \"kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6\") pod \"02fc8dbb-3054-47b4-9f7e-7cc986784199\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.775348 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content\") pod \"02fc8dbb-3054-47b4-9f7e-7cc986784199\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.775513 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities\") pod \"02fc8dbb-3054-47b4-9f7e-7cc986784199\" (UID: \"02fc8dbb-3054-47b4-9f7e-7cc986784199\") " Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.776532 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities" (OuterVolumeSpecName: "utilities") pod "02fc8dbb-3054-47b4-9f7e-7cc986784199" (UID: "02fc8dbb-3054-47b4-9f7e-7cc986784199"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.785398 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6" (OuterVolumeSpecName: "kube-api-access-tvmk6") pod "02fc8dbb-3054-47b4-9f7e-7cc986784199" (UID: "02fc8dbb-3054-47b4-9f7e-7cc986784199"). InnerVolumeSpecName "kube-api-access-tvmk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.879114 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvmk6\" (UniqueName: \"kubernetes.io/projected/02fc8dbb-3054-47b4-9f7e-7cc986784199-kube-api-access-tvmk6\") on node \"crc\" DevicePath \"\"" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.879156 4703 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.907114 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02fc8dbb-3054-47b4-9f7e-7cc986784199" (UID: "02fc8dbb-3054-47b4-9f7e-7cc986784199"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:42:06 crc kubenswrapper[4703]: I1209 13:42:06.982278 4703 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02fc8dbb-3054-47b4-9f7e-7cc986784199-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.144571 4703 generic.go:334] "Generic (PLEG): container finished" podID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerID="853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6" exitCode=0 Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.144633 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerDied","Data":"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6"} Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.144677 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dpjgp" event={"ID":"02fc8dbb-3054-47b4-9f7e-7cc986784199","Type":"ContainerDied","Data":"d258eae6ea29990bfbb206dbf03adcf9eadb73891a89993aadaee33344f494ea"} Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.144701 4703 scope.go:117] "RemoveContainer" containerID="853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.144642 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dpjgp" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.176538 4703 scope.go:117] "RemoveContainer" containerID="98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.186163 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.197204 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dpjgp"] Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.200757 4703 scope.go:117] "RemoveContainer" containerID="7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.287313 4703 scope.go:117] "RemoveContainer" containerID="853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6" Dec 09 13:42:07 crc kubenswrapper[4703]: E1209 13:42:07.289339 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6\": container with ID starting with 853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6 not found: ID does not exist" containerID="853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.289413 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6"} err="failed to get container status \"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6\": rpc error: code = NotFound desc = could not find container \"853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6\": container with ID starting with 853b4410524683349c09e0e5533947de576ada5d75ddb059290bc4b60869a6b6 not found: ID does not exist" Dec 09 13:42:07 crc 
kubenswrapper[4703]: I1209 13:42:07.289485 4703 scope.go:117] "RemoveContainer" containerID="98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a" Dec 09 13:42:07 crc kubenswrapper[4703]: E1209 13:42:07.289929 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a\": container with ID starting with 98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a not found: ID does not exist" containerID="98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.289969 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a"} err="failed to get container status \"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a\": rpc error: code = NotFound desc = could not find container \"98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a\": container with ID starting with 98e15102262e4f7790d192bff59da6ee47ea12e0cb8724947644405ac9d4946a not found: ID does not exist" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.289994 4703 scope.go:117] "RemoveContainer" containerID="7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7" Dec 09 13:42:07 crc kubenswrapper[4703]: E1209 13:42:07.290391 4703 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7\": container with ID starting with 7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7 not found: ID does not exist" containerID="7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7" Dec 09 13:42:07 crc kubenswrapper[4703]: I1209 13:42:07.290422 4703 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7"} err="failed to get container status \"7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7\": rpc error: code = NotFound desc = could not find container \"7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7\": container with ID starting with 7332964c1b9a35e68d863c8106689bdcb3ffc727182d981ce9bfddfb613bbcb7 not found: ID does not exist" Dec 09 13:42:08 crc kubenswrapper[4703]: I1209 13:42:08.071250 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:42:08 crc kubenswrapper[4703]: E1209 13:42:08.072136 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:42:09 crc kubenswrapper[4703]: E1209 13:42:09.073172 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:42:09 
crc kubenswrapper[4703]: I1209 13:42:09.085513 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02fc8dbb-3054-47b4-9f7e-7cc986784199" path="/var/lib/kubelet/pods/02fc8dbb-3054-47b4-9f7e-7cc986784199/volumes" Dec 09 13:42:15 crc kubenswrapper[4703]: E1209 13:42:15.073718 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:42:22 crc kubenswrapper[4703]: E1209 13:42:22.072467 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:42:23 crc kubenswrapper[4703]: I1209 13:42:23.070861 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:42:23 crc kubenswrapper[4703]: E1209 13:42:23.071287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:42:27 crc kubenswrapper[4703]: E1209 13:42:27.074307 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:42:34 crc kubenswrapper[4703]: E1209 13:42:34.073484 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:42:36 crc kubenswrapper[4703]: I1209 13:42:36.070535 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:42:36 crc kubenswrapper[4703]: E1209 13:42:36.071364 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:42:42 crc kubenswrapper[4703]: E1209 13:42:42.073164 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:42:47 crc kubenswrapper[4703]: I1209 13:42:47.070476 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:42:47 crc kubenswrapper[4703]: E1209 13:42:47.071550 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:42:48 crc kubenswrapper[4703]: E1209 13:42:48.074804 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:42:56 crc kubenswrapper[4703]: E1209 13:42:56.073714 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:43:00 crc kubenswrapper[4703]: I1209 13:43:00.071087 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:43:00 crc kubenswrapper[4703]: E1209 13:43:00.072303 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:43:00 crc kubenswrapper[4703]: E1209 13:43:00.073002 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:43:09 crc kubenswrapper[4703]: E1209 13:43:09.076752 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:43:13 crc kubenswrapper[4703]: I1209 13:43:13.080300 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:43:13 crc kubenswrapper[4703]: E1209 13:43:13.093068 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 09 13:43:14 crc kubenswrapper[4703]: E1209 13:43:14.072608 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:43:20 crc kubenswrapper[4703]: E1209 13:43:20.072528 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:43:24 crc kubenswrapper[4703]: I1209 13:43:24.070772 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"
Dec 09 13:43:24 crc kubenswrapper[4703]: E1209 13:43:24.072545 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:43:25 crc kubenswrapper[4703]: E1209 13:43:25.075152 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:43:33 crc kubenswrapper[4703]: E1209 13:43:33.072931 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:43:35 crc kubenswrapper[4703]: I1209 13:43:35.069769 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"
Dec 09 13:43:35 crc kubenswrapper[4703]: E1209 13:43:35.070457 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:43:37 crc kubenswrapper[4703]: E1209 13:43:37.072914 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:43:45 crc kubenswrapper[4703]: E1209 13:43:45.074480 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:43:48 crc kubenswrapper[4703]: E1209 13:43:48.075045 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:43:50 crc kubenswrapper[4703]: I1209 13:43:50.070937 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:43:50 crc kubenswrapper[4703]: E1209 13:43:50.071737 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:44:00 crc kubenswrapper[4703]: E1209 13:44:00.073233 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:44:01 crc kubenswrapper[4703]: E1209 13:44:01.082139 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:44:02 crc kubenswrapper[4703]: I1209 13:44:02.070482 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:44:02 crc kubenswrapper[4703]: E1209 13:44:02.070953 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42" Dec 09 13:44:13 crc kubenswrapper[4703]: I1209 13:44:13.070509 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241" Dec 09 13:44:13 crc kubenswrapper[4703]: E1209 13:44:13.071906 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 09 13:44:14 crc kubenswrapper[4703]: E1209 13:44:14.074287 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:44:16 crc kubenswrapper[4703]: E1209 13:44:16.072539 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:44:25 crc kubenswrapper[4703]: E1209 13:44:25.077584 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:44:26 crc kubenswrapper[4703]: I1209 13:44:26.070045 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"
Dec 09 13:44:26 crc kubenswrapper[4703]: E1209 13:44:26.070446 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-q8sfk_openshift-machine-config-operator(32956ceb-8540-406e-8693-e86efb46cd42)\"" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" podUID="32956ceb-8540-406e-8693-e86efb46cd42"
Dec 09 13:44:29 crc kubenswrapper[4703]: E1209 13:44:29.073899 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:44:38 crc kubenswrapper[4703]: I1209 13:44:38.070062 4703 scope.go:117] "RemoveContainer" containerID="075036586a4acd047eda3bd628a4c1107d844816490ede14d5fd144b3a839241"
Dec 09 13:44:39 crc kubenswrapper[4703]: I1209 13:44:39.035345 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-q8sfk" event={"ID":"32956ceb-8540-406e-8693-e86efb46cd42","Type":"ContainerStarted","Data":"644dfa0b47b00967e52d739837fb2888beafb8a77be6bd58094a5b0aa33a8975"}
Dec 09 13:44:39 crc kubenswrapper[4703]: E1209 13:44:39.081999 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:44:41 crc kubenswrapper[4703]: E1209 13:44:41.080687 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
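The same three pods cycle through pull and restart backoff for several minutes before the next real pull attempt at 13:44:53 below. A throwaway way to tally such entries from a saved copy of this journal; the filename and the exact pattern are illustrative, not part of the log:

    # Illustrative only: count ImagePullBackOff entries per pod in a saved
    # copy of this journal ("kubelet.log" is a hypothetical filename).
    import re
    from collections import Counter

    pat = re.compile(r'ImagePullBackOff.*?pod="([^"]+)"')
    counts = Counter()
    with open("kubelet.log", encoding="utf-8") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1

    for pod, n in counts.most_common():
        print(f"{n:4d}  {pod}")
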
syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:44:53 crc kubenswrapper[4703]: I1209 13:44:53.074848 4703 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:44:53 crc kubenswrapper[4703]: E1209 13:44:53.203890 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:44:53 crc kubenswrapper[4703]: E1209 13:44:53.203967 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:44:53 crc kubenswrapper[4703]: E1209 13:44:53.204153 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n565h678h645h65h685h657h568h564h8fhcfhc4h58fh66bh564h649h5d5hb5h67dh57fh97h58ch557h4h68h59fh5c8h65ch586h5bch5b8h5c6h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7hbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
Dec 09 13:44:53 crc kubenswrapper[4703]: E1209 13:44:53.205529 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"
Dec 09 13:44:55 crc kubenswrapper[4703]: E1209 13:44:55.072183 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
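The ErrImagePull above is definitive: the registry reports the tag itself as deleted or expired. One way to confirm that from outside the kubelet is the standard OCI distribution endpoint (GET /v2/<name>/manifests/<tag>); this sketch assumes quay.rdoproject.org allows anonymous pulls:

    # Sketch: check whether a tag still resolves via the standard OCI
    # distribution API. A 404 matches the "deleted or has expired" errors
    # above; anonymous access is an assumption.
    import requests

    def tag_exists(registry, name, tag):
        url = f"https://{registry}/v2/{name}/manifests/{tag}"
        resp = requests.get(
            url,
            headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
            timeout=30,
        )
        return resp.status_code == 200

    print(tag_exists("quay.rdoproject.org",
                     "podified-master-centos10/openstack-ceilometer-central",
                     "current-tested"))
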
assignment" podUID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerName="registry-server" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.175174 4703 memory_manager.go:354] "RemoveStaleState removing state" podUID="02fc8dbb-3054-47b4-9f7e-7cc986784199" containerName="registry-server" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.176485 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.180838 4703 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.181556 4703 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.190388 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw"] Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.281669 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.282337 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.282666 4703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mftc\" (UniqueName: \"kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.385737 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mftc\" (UniqueName: \"kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.386181 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.386788 4703 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume\") pod 
\"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.387367 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.394902 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.405418 4703 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mftc\" (UniqueName: \"kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc\") pod \"collect-profiles-29421465-qcgrw\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.510625 4703 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:00 crc kubenswrapper[4703]: I1209 13:45:00.998953 4703 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw"] Dec 09 13:45:01 crc kubenswrapper[4703]: I1209 13:45:01.271804 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" event={"ID":"c9cc5dda-f533-4978-8669-1619375a181e","Type":"ContainerStarted","Data":"88918c29380806bb4880ca5d85b5b0a0d7bafa1693ade9cad0d78115773c20e0"} Dec 09 13:45:01 crc kubenswrapper[4703]: I1209 13:45:01.272422 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" event={"ID":"c9cc5dda-f533-4978-8669-1619375a181e","Type":"ContainerStarted","Data":"8da1f29d108edac66a9a6ee80dc97089b537a3c35415c16531bc23b8cbbbf43a"} Dec 09 13:45:01 crc kubenswrapper[4703]: I1209 13:45:01.293742 4703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" podStartSLOduration=1.2937141620000001 podStartE2EDuration="1.293714162s" podCreationTimestamp="2025-12-09 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 13:45:01.287795807 +0000 UTC m=+6000.536559326" watchObservedRunningTime="2025-12-09 13:45:01.293714162 +0000 UTC m=+6000.542477681" Dec 09 13:45:02 crc kubenswrapper[4703]: I1209 13:45:02.285651 4703 generic.go:334] "Generic (PLEG): container finished" podID="c9cc5dda-f533-4978-8669-1619375a181e" containerID="88918c29380806bb4880ca5d85b5b0a0d7bafa1693ade9cad0d78115773c20e0" exitCode=0 Dec 09 13:45:02 crc kubenswrapper[4703]: I1209 13:45:02.285794 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" event={"ID":"c9cc5dda-f533-4978-8669-1619375a181e","Type":"ContainerDied","Data":"88918c29380806bb4880ca5d85b5b0a0d7bafa1693ade9cad0d78115773c20e0"} Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.709418 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.878772 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume\") pod \"c9cc5dda-f533-4978-8669-1619375a181e\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.879103 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mftc\" (UniqueName: \"kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc\") pod \"c9cc5dda-f533-4978-8669-1619375a181e\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.879145 4703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume\") pod \"c9cc5dda-f533-4978-8669-1619375a181e\" (UID: \"c9cc5dda-f533-4978-8669-1619375a181e\") " Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.880278 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9cc5dda-f533-4978-8669-1619375a181e" (UID: "c9cc5dda-f533-4978-8669-1619375a181e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.887171 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc" (OuterVolumeSpecName: "kube-api-access-9mftc") pod "c9cc5dda-f533-4978-8669-1619375a181e" (UID: "c9cc5dda-f533-4978-8669-1619375a181e"). InnerVolumeSpecName "kube-api-access-9mftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.887277 4703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9cc5dda-f533-4978-8669-1619375a181e" (UID: "c9cc5dda-f533-4978-8669-1619375a181e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.982080 4703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mftc\" (UniqueName: \"kubernetes.io/projected/c9cc5dda-f533-4978-8669-1619375a181e-kube-api-access-9mftc\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.982121 4703 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9cc5dda-f533-4978-8669-1619375a181e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:03 crc kubenswrapper[4703]: I1209 13:45:03.982131 4703 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9cc5dda-f533-4978-8669-1619375a181e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:04 crc kubenswrapper[4703]: I1209 13:45:04.308127 4703 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" event={"ID":"c9cc5dda-f533-4978-8669-1619375a181e","Type":"ContainerDied","Data":"8da1f29d108edac66a9a6ee80dc97089b537a3c35415c16531bc23b8cbbbf43a"} Dec 09 13:45:04 crc kubenswrapper[4703]: I1209 13:45:04.308454 4703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da1f29d108edac66a9a6ee80dc97089b537a3c35415c16531bc23b8cbbbf43a" Dec 09 13:45:04 crc kubenswrapper[4703]: I1209 13:45:04.308236 4703 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-qcgrw" Dec 09 13:45:04 crc kubenswrapper[4703]: I1209 13:45:04.381371 4703 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw"] Dec 09 13:45:04 crc kubenswrapper[4703]: I1209 13:45:04.389347 4703 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-m6qgw"] Dec 09 13:45:05 crc kubenswrapper[4703]: E1209 13:45:05.077460 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:45:05 crc kubenswrapper[4703]: I1209 13:45:05.095842 4703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39cb7561-4275-41f0-914d-7a0ea0653e27" path="/var/lib/kubelet/pods/39cb7561-4275-41f0-914d-7a0ea0653e27/volumes" Dec 09 13:45:10 crc kubenswrapper[4703]: E1209 13:45:10.072682 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a" Dec 09 13:45:20 crc kubenswrapper[4703]: E1209 13:45:20.073379 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a" Dec 09 13:45:23 crc kubenswrapper[4703]: 
Dec 09 13:45:23 crc kubenswrapper[4703]: E1209 13:45:23.215235 4703 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 13:45:23 crc kubenswrapper[4703]: E1209 13:45:23.215752 4703 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Dec 09 13:45:23 crc kubenswrapper[4703]: E1209 13:45:23.216018 4703 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kdkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-6ljb8_openstack(862e9d91-760e-43af-aba3-a23255b0fd7a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:45:23 crc kubenswrapper[4703]: E1209 13:45:23.217406 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-6ljb8" podUID="862e9d91-760e-43af-aba3-a23255b0fd7a"
Dec 09 13:45:25 crc kubenswrapper[4703]: I1209 13:45:25.303844 4703 scope.go:117] "RemoveContainer" containerID="c4a1699dc61b1b1723b2b8a4513847ca72ecb8a36b3f7feb37ff08a1cb125a74"
Dec 09 13:45:31 crc kubenswrapper[4703]: E1209 13:45:31.110290 4703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ce42c586-f397-4f98-be45-f56d36115d7a"